    Spotlight

    A New Jersey lawsuit shows how hard it is to fight deepfake porn

By admin | January 12, 2026 | 5 Mins Read
[Image: concept illustration depicting messy litigation]

For more than two years, an app called ClothOff has been terrorizing young women online — and it’s been maddeningly difficult to stop. The app has been taken down from the two major app stores and it’s banned from most social platforms, but it’s still available on the web and through a Telegram bot. In October, a clinic at Yale Law School filed a lawsuit seeking to take down the app entirely, forcing the owners to delete all images and cease operation. But simply finding the defendants has been a challenge.

“It’s incorporated in the British Virgin Islands,” explains Professor John Langford, a co-lead counsel in the lawsuit, “but we believe it’s run by a brother and sister in Belarus. It may even be part of a larger network around the world.”

    It’s a bitter lesson in the wake of the recent flood of non-consensual pornography generated by Elon Musk’s xAI, which included many underage victims. Child sexual abuse material is the most legally toxic content on the internet — illegal to produce, transmit or store, and regularly scanned for on every major cloud service. But despite the intense legal prohibitions, there are still few ways to deal with image generators like ClothOff, as Langford’s case demonstrates. Individual users can be prosecuted, but platforms like ClothOff and Grok are far more difficult to police, leaving few options for victims hoping to find justice in court.

    The clinic’s complaint, which is available online, paints an alarming picture. The plaintiff is an anonymous high school student in New Jersey, whose classmates used ClothOff to alter her Instagram photos. She was 14 years old when the original Instagram photos were taken, which means the AI-modified versions are legally classified as child abuse imagery. But even though the modified images are straightforwardly illegal, local authorities declined to prosecute the case, citing the difficulty of obtaining evidence from suspects’ devices.

    “Neither the school nor law enforcement ever established how broadly the CSAM of Jane Doe and other girls was distributed,” the complaint reads.

    Still, the court case has moved slowly. The complaint was filed in October, and in the months since, Langford and his colleagues have been in the process of serving notice to the defendants — a difficult task given the global nature of the enterprise. Once they’ve been served, the clinic can push for a court appearance and, eventually, a judgment, but in the meantime the legal system has given little comfort to ClothOff’s victims.

The Grok case might seem like a simpler problem to fix. Elon Musk’s xAI isn’t hiding, and there’s plenty of money on the line for lawyers who can win a claim. But Grok is a general-purpose tool, which makes it much harder to hold the company accountable in court.


    “ClothOff is designed and marketed specifically as a deepfake pornography image and video generator,” Langford told me. “When you’re suing a general system that users can query for all sorts of things, it gets a lot more complicated.”

A number of US laws have already banned deepfake pornography — most notably the Take It Down Act. But while specific users are clearly breaking those laws, it’s much harder to hold the entire platform accountable. Existing laws require clear evidence of an intent to harm, which would mean proving xAI knew its tool would be used to produce non-consensual pornography. Without that evidence, xAI’s basic First Amendment rights would provide significant legal protection.

    “In terms of the First Amendment, it’s quite clear Child Sexual Abuse material is not protected expression,” Langford says. “So when you’re designing a system to create that kind of content, you’re clearly operating outside of what’s protected by the First Amendment. But when you’re a general system that users can query for all sorts of things, it’s not so clear.”

The easiest way to surmount those problems would be to show that xAI willfully ignored the problem. It’s a real possibility, given recent reporting that Musk directed employees to loosen Grok’s safeguards. But even then, it would be a far riskier case to take on.

    “Reasonable people can say, we knew this was a problem years ago,” Langford says. “How can you not have had more stringent controls in place to make sure this doesn’t happen? That is a kind of recklessness or knowledge but it’s just a more complicated case.”

    Those First Amendment issues are why xAI’s biggest pushback has come from court systems without robust legal protections for free speech. Both Indonesia and Malaysia have taken steps to block access to the Grok chatbot, while regulators in the United Kingdom have opened an investigation that could lead to a similar ban. Other preliminary steps have been taken by the European Commission, France, Ireland, India and Brazil. In contrast, no US regulatory agency has issued an official response.

    It’s impossible to say how the investigations will resolve, but at the very least, the flood of imagery raises lots of questions for regulators to investigate — and the answers could be damning.

“If you are posting, distributing, disseminating child sexual abuse material, you are violating criminal prohibitions and can be held accountable,” Langford says. “The hard question is, what did X know? What did X do or not do? What are they doing now in response to it?”
