New Jersey lawsuit reveals challenges in combating deepfake porn as platforms like ClothOff and Grok evade accountability despite illegal content
A lawsuit filed by a Yale Law School clinic in New Jersey highlights how difficult it is to hold deepfake porn platforms accountable, even when the content they produce is clearly illegal. The app at the center of the case, ClothOff, has been used for more than two years to generate non-consensual pornographic images of young women, including a 14-year-old high school student whose Instagram photos were altered without her consent. The resulting images are legally classified as child sexual abuse material (CSAM), which is strictly prohibited under U.S. law and actively monitored by major cloud providers. Despite being removed from major app stores and banned on most social media platforms, ClothOff remains accessible via the web and a Telegram bot.

The app is incorporated in the British Virgin Islands, and its operators are believed to be a brother and sister based in Belarus, possibly part of a larger global network. This international structure makes identifying the defendants and serving them with legal process extremely difficult. The lawsuit, filed in October, seeks to shut down the app entirely, force the deletion of all images, and hold the operators responsible, but the case has moved slowly. Local law enforcement declined to pursue criminal charges, citing the difficulty of obtaining evidence from suspects' devices and the lack of a clear path to identifying the individuals behind the platform.

The legal hurdles are compounded by the nature of the technology. ClothOff is explicitly designed as a deepfake pornography generator, which makes it easier to argue that its creators knowingly facilitated illegal activity. Grok, by contrast, the chatbot built by Elon Musk's xAI, is a general-purpose AI system that can be put to many uses. That distinction creates significant legal ambiguity: while laws such as the Take It Down Act prohibit deepfake pornography, proving that a platform like Grok knowingly enabled such misuse is difficult. The First Amendment protects broad forms of expression, a protection that extends to AI tools even when they can be misused. To hold such a platform liable, prosecutors must demonstrate intent to harm or reckless disregard for the consequences. There is evidence that Musk directed staff to weaken Grok's safeguards, but proving that the company should have known about the risks and failed to act remains a complex legal challenge.

Other countries have taken stronger action. Indonesia, Malaysia, the UK, and several European and global regulators have either blocked access to Grok or launched investigations. In the U.S., no federal agency has issued a formal response, leaving victims with limited recourse.

The case underscores a growing crisis: the law is clear on the illegality of CSAM, but the legal system struggles to keep pace with the global, decentralized nature of AI-powered abuse. As Professor John Langford notes, the real question is not just whether the content is illegal, but what platforms knew, what they did, and what they are doing now to prevent future harm. The answer may determine whether justice is possible for victims of deepfake abuse.
