State AGs Demand AI Accountability from Google, Meta, and OpenAI Over Safety and Legal Violations
State attorneys general from across the United States have issued a formal warning to major AI companies, including Google, Meta, and OpenAI, stating that their generative AI chatbots may be violating state laws. In a letter made public on December 10, the coalition of attorneys general set a deadline of January 16, 2026, for the companies to respond with concrete plans to improve the safety and accountability of their AI systems.

The letter underscores growing concerns that unchecked AI development poses serious risks to public safety, particularly for children. It criticizes AI outputs described as “sycophantic and delusional,” warning that such behavior can lead to real-world harm. The attorneys general cited multiple incidents, including deaths allegedly linked to AI-generated misinformation and cases in which chatbots engaged in inappropriate or harmful conversations with minors.

The letter argues that some AI-generated content may directly breach state laws, such as by promoting illegal activities or offering medical advice without a license, and warns that developers could be held legally responsible for the consequences of their models’ outputs. It stresses that innovation should not be used as an excuse to bypass legal obligations or mislead the public, especially parents.

To address these risks, the attorneys general are demanding stronger safeguards. Key requirements include eliminating “dark patterns” that manipulate users, providing clear and visible warnings about potentially harmful or inaccurate AI responses, and allowing independent third-party audits of AI systems to ensure transparency and accountability.

The push comes amid an intensifying national debate over AI regulation, with lawmakers and regulators increasingly focused on ensuring that rapidly advancing technologies do not compromise safety, privacy, or consumer rights. While Google, Meta, and OpenAI have not yet responded to requests for comment, the letter signals growing legal and political pressure on tech giants to take responsibility for the real-world impacts of their AI products.
