The 2025 Foundation Model Transparency Index (FMTI), the third annual edition published by researchers from Stanford, UC Berkeley, Princeton, and MIT, reports a marked deterioration in corporate transparency across major foundation model developers. The Index, designed to systematically evaluate public disclosure practices across a broad set of indicators, found that the average transparency score fell to approximately 40 out of 100 in 2025, down significantly from an average of 58 in 2024.
The FMTI assesses 13 companies on disclosures relating to training data sources, risk mitigation strategies, environmental impact, computational resources, and post-deployment monitoring. It generates a composite score on a 100-point scale intended to reflect the completeness and accessibility of information relevant to users, regulators, and external researchers.
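The arithmetic behind such a composite is easy to illustrate: if each indicator is graded as disclosed or not disclosed and all indicators are weighted equally, the score is simply the share of satisfied indicators scaled to 100. The sketch below is a simplified illustration under those assumptions, not the Index's published scoring code, and the indicator names and values are hypothetical.

```python
# Minimal sketch of a composite transparency score, assuming each indicator
# is graded as a binary disclosed / not-disclosed value and weighted equally.
# Indicator names and values below are illustrative, not FMTI data.

def composite_score(indicators: dict[str, bool]) -> float:
    """Return the share of satisfied indicators, scaled to a 100-point scale."""
    if not indicators:
        return 0.0
    satisfied = sum(1 for disclosed in indicators.values() if disclosed)
    return 100.0 * satisfied / len(indicators)

example = {
    "training_data_sources": True,
    "training_compute": False,
    "risk_mitigations": True,
    "environmental_impact": False,
    "post_deployment_monitoring": False,
}

print(f"Composite score: {composite_score(example):.0f} / 100")  # -> 40 / 100
```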
The latest findings show that transparency practices are uneven and have regressed in key areas that affect enterprise adoption, governance, and operational risk management. While a small number of companies demonstrated relatively robust disclosure practices—most notably IBM, which achieved the highest score in the Index’s history—many others showed significant declines. Some companies such as xAI and Midjourney scored in the mid-teens, indicating minimal public disclosure on critical topics including training data provenance and model risk assessments.
Several major developers saw substantial score reductions compared with prior years. Meta's score dropped sharply after it released its Llama 4 model without a corresponding technical report, and OpenAI's transparency score also declined year over year. New entrants to the Index, including Alibaba, DeepSeek, Midjourney, and xAI, widened the spread of scores, with these additions generally clustering in the lower half of the range.
The Index highlights that the industry remains “systemically opaque” in four critical areas: the origin and composition of training data, total training compute, patterns of real-world usage, and the downstream societal or environmental impacts of deployed models. This opacity poses challenges for enterprise stakeholders tasked with managing risk, ensuring reliable performance, and meeting compliance obligations. Without consistent reporting across these dimensions, organizations may struggle to evaluate model behavior, benchmark alternatives, or fulfill emerging regulatory requirements.
The 2025 assessment also underscores that openness, defined as releasing model weights, is not a reliable proxy for transparency. Some open-weight models earned comparatively high transparency scores, but other open releases offered little substantive disclosure on operational practices or risk mitigation, indicating that weight availability alone does not satisfy the broader set of enterprise transparency needs.
From a strategic perspective, the report aligns with a shifting regulatory landscape. Jurisdictions including the European Union and California have enacted or proposed legal mandates for specific types of AI transparency and risk reporting. The Index authors position the FMTI as a tool that can assist policymakers and organizational governance bodies by identifying which areas of disclosure are most resistant to voluntary improvement and may therefore require policy intervention.
Enterprises adopting AI at scale should consider these transparency trends against internal risk frameworks, vendor governance processes, and supplier due diligence practices. The decline in publicly accessible information increases the importance of contractual transparency obligations, independent audits, and robust model monitoring post-deployment. As adoption deepens across industries, the quality and availability of transparency disclosures will materially influence assessments of reliability, alignment, and compliance.
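As one simplified way to operationalize this in supplier due diligence, a governance team could track whether each vendor publicly discloses the four areas the Index flags as systemically opaque and escalate to contractual disclosure requirements when coverage falls short. The sketch below is a hypothetical illustration; the vendor names, field names, and escalation threshold are assumptions, not part of the Index.

```python
# Hypothetical due-diligence check: count how many of the four systemically
# opaque areas (per the Index) a supplier publicly discloses, and escalate to
# contractual disclosure terms when coverage is below an internal threshold.
# Vendor names, field names, and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VendorDisclosure:
    vendor: str
    training_data_provenance: bool   # origin and composition of training data
    training_compute: bool           # total compute used in training
    usage_reporting: bool            # patterns of real-world usage
    downstream_impact: bool          # societal / environmental impacts

def review(v: VendorDisclosure, required: int = 3) -> str:
    areas = [
        v.training_data_provenance,
        v.training_compute,
        v.usage_reporting,
        v.downstream_impact,
    ]
    disclosed = sum(areas)
    action = "ok" if disclosed >= required else "escalate: add contractual disclosure terms"
    return f"{v.vendor}: {disclosed}/4 opaque areas disclosed -> {action}"

for candidate in [
    VendorDisclosure("Vendor A", True, True, False, True),
    VendorDisclosure("Vendor B", False, False, True, False),
]:
    print(review(candidate))
```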