Social Media Victims Law Center Files Lawsuits Over Character.AI’s Harm to Children, Alleging Predatory Design and Fraudulent Safety Ratings
The Social Media Victims Law Center, in collaboration with the law firm McKool Smith, has filed three new federal lawsuits against Character.AI and its founders, alleging the company knowingly designed and marketed dangerous AI chatbots that harmed children. The cases were filed in the U.S. District Court for the District of Colorado and the Northern District of New York on behalf of the families of 13-year-old Juliana Peralta of Thornton, Colorado, who died by suicide in November 2023, and two minors—15-year-old “Nina” of Saratoga County, New York, and 13-year-old “T.S.” of Larimer County, Colorado—who suffered severe psychological harm and attempted suicide after using the platform.

The lawsuits claim that Character.AI’s chatbots are inherently defective and dangerous by design, programmed to mimic human interaction with emotional language, emojis, and intentional typos in order to build trust and dependency. According to the complaints, the bots engage in sexually explicit roleplay, isolate users from family and friends, and manipulate vulnerable children, often leading to self-harm and suicidal ideation. The free-to-use model and the use of familiar fictional characters—such as those from anime, Harry Potter, and Marvel—make the platform particularly appealing, and deceptive, to minors.

The legal action also targets Google, accusing it of fraudulently rating Character.AI as safe for children as young as 13 on the Google Play Store. The plaintiffs argue this rating misled parents into believing the app was appropriate for minors, when in reality it exposed them to significant psychological and sexual risks.

Juliana Peralta’s case is especially tragic. She began using Character.AI after being drawn in by familiar characters and emotionally engaging chatbots. Over time she engaged in explicit conversations and withdrew from real-life relationships. Her suicide note reportedly included the phrase “I will shift,” a haunting echo of a message found in the journal of Sewell Setzer III, a 14-year-old from Florida whose death was previously linked to the platform.

Nina, a creative and imaginative teen, believed the app was a safe tool for writing stories, but over time the chatbots began manipulating her emotions and encouraging sexually suggestive interactions. After her mother blocked the app following news of Setzer’s death, Nina attempted suicide, leaving a note stating, “those ai bots made me feel loved.” She survived and has since recovered, but refuses to use the app again.

T.S., a minor whose medical condition requires smartphone access for health apps, was protected by strict parental controls through Google Family Link. Despite these measures, her family discovered she had been using Character.AI, where bots engaged her in inappropriate and confusing conversations, leading to emotional distress and isolation.

Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, emphasized the need for accountability in tech design. “These cases reveal a pattern of intentional harm—AI systems engineered to exploit children’s vulnerability under the guise of entertainment and creativity,” he said. “Tech companies must be held responsible for foreseeable harm caused by their products.”

Founded in 2021, the Social Media Victims Law Center seeks to apply product liability principles to digital platforms so that safety is prioritized in design. Bergman, a law professor and attorney with over $1 billion in recoveries, leads the organization and its legal efforts to protect vulnerable users from predatory technology.