
AI Company Transparency Hits New Low, Stanford Study Reveals: Average Score Dips to 40/100 Amid Rising Opacity on Data, Risks, and Environmental Impact

A new analysis reveals a sharp decline in transparency among major AI companies, with the 2025 Foundation Model Transparency Index (FMTI) showing an average score of just 40 out of 100, down from 58 in 2024. The index, produced by researchers from Stanford, Berkeley, Princeton, and MIT, evaluates 13 leading AI companies across 15 key areas, including training data, model access, risk mitigation, and societal impact. The results highlight growing opacity across the industry, even as AI's influence on the global economy and daily life continues to expand.

The data shows a significant divergence in practices. A small group of top performers, led by IBM, scored 95 out of 100, the highest in the index's history. IBM stands out for its detailed disclosures, including enabling external researchers to replicate its training data and granting access to auditors. In contrast, xAI and Midjourney scored just 14 out of 100, providing no information on training data, model risks, or mitigation strategies. These two companies, along with others such as OpenAI, Google, and Anthropic, are among the 10 that disclose no key data on environmental impact, such as energy use, carbon emissions, or water consumption.

The 2025 edition is the first to include four new companies: Alibaba, DeepSeek, Midjourney, and xAI, two of which are based in China. All four scored in the bottom half, further underscoring the global lack of transparency.

The rankings have also shifted dramatically. Meta, which led in 2023, dropped from 60 to 31, while OpenAI fell from second place to second-to-last. AI21 Labs, previously near the bottom, rose to first place, reflecting a major shift in transparency practices.

The decline is linked to reduced public reporting. Meta did not release a technical report for its Llama 4 model, and Google drew criticism, including scrutiny from UK lawmakers, for delays in publishing a model card and technical report for Gemini 2.5.

The report also notes that the industry remains systemically opaque on four core issues: training data, training compute, model usage, and societal impact, key areas that affect the entire AI supply chain. While some open-source developers are more transparent, openness does not guarantee transparency. DeepSeek, Meta, and Alibaba, for example, release model weights but withhold critical information on environmental costs, risk management, and real-world use. The distinction is vital: open access to model weights does not equate to public accountability.

The findings point to a clear need for policy intervention. California and the European Union have already introduced laws requiring transparency about AI risks, and Dean Ball, a former White House AI adviser and co-author of the U.S. AI Action Plan, has advocated for transparency as a foundational element of responsible AI governance. The FMTI serves as a critical tool for policymakers, identifying where information is missing and where regulation is most urgently needed. Without greater transparency, oversight, risk mitigation, and public trust in AI will remain severely limited.
