
California AG Investigates Grok Over Nonconsensual Sexualized Deepfakes Amid Global Backlash

California Attorney General Rob Bonta has launched an investigation into xAI’s Grok chatbot following a surge of reports that it has generated non-consensual, sexually explicit deepfakes of real people. Bonta described an “avalanche of reports” detailing the creation and online distribution of nude and sexually explicit images of women and children, many produced without consent and used to harass people across the internet. “The material that xAI has produced and shared online in recent weeks is deeply troubling and unacceptable,” Bonta said in a statement. “I urge xAI to take immediate action to prevent further harm and ensure its systems are not being used to exploit or endanger individuals.”

The probe comes amid growing international scrutiny. Regulators in India, the UK, Indonesia, and Malaysia have all taken steps against Grok; Indonesia and Malaysia have blocked public access to the tool entirely. In the UK, the communications regulator Ofcom announced its own investigation, while Prime Minister Keir Starmer warned that X could lose its right to self-regulate.

Under mounting pressure, xAI restricted Grok’s image generation capabilities to paying subscribers. Asked about the investigation, xAI responded with its standard reply to media inquiries: “Legacy Media Lies.” Elon Musk, CEO of X and founder of xAI, claimed he was unaware of any instance in which Grok generated nude images of underage individuals, and reiterated that Grok does not create content on its own but only in response to user prompts. “Grok does not spontaneously generate images,” Musk wrote on X. “It will refuse to produce anything illegal, as its operating principle is to follow the laws of the country or state in which it is used.”

The core of the investigation, however, concerns users asking Grok to alter images of real people in sexually explicit ways, such as adding or removing clothing, without their consent. Even when generating such images is not itself illegal, it raises serious ethical and legal concerns, especially when the results are shared online to harass or humiliate.

In a significant development, the U.S. Senate on Tuesday unanimously passed a bipartisan bill, known as the Defiance Act, that would grant victims a federal civil right to sue individuals who use AI to generate non-consensual sexual content. The legislation specifically targets digital manipulation that makes someone appear nude, even if they were fully clothed in the original photo. Senator Richard Durbin, a Democrat from Illinois and the bill’s sponsor, cited the Grok incidents as a prime example of the dangers posed by unregulated AI. “Recent reports showed that X users can ask its AI chatbot Grok to undress women and underage girls in photos,” Durbin said on the Senate floor. “Grok will comply, showing various states of undress—images I won’t repeat for the record, but they’re horrific.” The bill now moves to the House of Representatives, where it is uncertain whether it will receive a vote.

The case underscores a broader national conversation about AI ethics, accountability, and the urgent need for stronger legal frameworks to protect individuals from digital exploitation. Last year, President Donald Trump signed bipartisan legislation requiring social media platforms to remove non-consensual images and AI deepfakes within 48 hours of a request, setting a precedent for future enforcement.