
California’s New AI Law Weakens Disclosure Rules After S.B. 1047 Failure

California’s newly enacted AI legislation has delivered a major win for Big Tech, effectively watering down the more stringent proposals that had sparked intense debate. The law, which replaces the failed S.B. 1047, drops a key requirement that companies disclose when content is generated by artificial intelligence, a provision widely seen as a critical step toward transparency.

Instead of mandating clear labeling of AI-generated content across platforms, the new law introduces a narrower disclosure framework. Companies must inform users only in specific contexts, such as when AI-generated content could mislead people about a product, service, or political message. The law also includes broad exemptions: companies can avoid disclosure by claiming the content is “transformative” or “creative,” terms that remain undefined and open to interpretation.

Critics argue that these loopholes undermine the law’s effectiveness. Advocates for transparency and digital rights say that without a broad, enforceable disclosure mandate, consumers may continue to be misled by AI-generated content without knowing it. The law also contains a “kill switch” provision that lets companies opt out of disclosure requirements entirely if they implement internal risk assessments, a mechanism that critics say gives Big Tech unchecked power to self-regulate.

The shift away from S.B. 1047, which was drafted with input from consumer advocates and experts, reflects significant influence from major technology companies. Industry groups lobbied heavily against the original bill, warning of compliance burdens and slowed innovation. In response, lawmakers scaled back the proposal, ultimately crafting a law that prioritizes industry flexibility over public accountability.
While the new law does establish a framework for AI oversight and creates a state AI safety office, its lack of strong enforcement mechanisms and the wide-ranging exemptions mean it falls far short of what many had hoped for. As AI becomes increasingly embedded in media, advertising, and public discourse, the absence of a clear, universal disclosure rule leaves consumers vulnerable to manipulation and misinformation. In the end, California’s new AI law may offer a symbolic step forward—but for Big Tech, it delivers exactly what it wanted: a light-touch regulatory environment that preserves business autonomy while giving the appearance of action.