OpenAI’s projected long-term cash burn has sharpened growing concerns about the AI bubble and the economic sustainability of frontier AI development. While precise forward projections vary, multiple outlets and financial commentators, including The Financial Times, have highlighted internal forecasts suggesting cumulative funding needs well above $200 billion by 2030, driven primarily by escalating compute, infrastructure, and inference costs.
In September 2025, outlets including The Information and Reuters reported that OpenAI had raised its projected cash burn through 2029 by roughly $80 billion, to $115 billion.
Last week, CNBC reported that OpenAI has reined in its infrastructure commitments, planning total compute spend of $600 billion by 2030, down from the $1.4 trillion figure it had previously touted, against target revenue of $280 billion over the same period.
The Financial Times, based on data and projections from HSBC and FTAV, predicts that by 2030, OpenAI will be operating at a loss of $76.46 billion per annum, even with projected revenue of $213.59 billion (as reported in November 2025). In other words, even significant top-line growth may not be enough to offset the scale of infrastructure investment.
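As a back-of-envelope check on those reported figures (an inference from the FT numbers, not a figure the FT itself states), the implied annual cost base follows directly:

```latex
\text{implied costs} = \text{revenue} + \text{loss} = \$213.59\,\text{bn} + \$76.46\,\text{bn} \approx \$290\,\text{bn per annum}
```

On this rough reading, costs would need to be running at well over $1 of spend per $1 of revenue in 2030 for the projected loss to materialize.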
OpenAI relies on external investment to fuel continued innovation. Projections of funding needs above $200 billion reflect the capital required to finance operating burn, infrastructure build-out, and long-term compute commitments through the end of the decade. Continued access to that capital depends on a compelling long-term return narrative that keeps investors on side; sustained funding pressure could tip the scales in favor of competitors.
This dynamic is underscored by today’s announcement (27 February 2026) of a further $110 billion in new investment in OpenAI at a $730 billion pre-money valuation, including $30 billion from SoftBank, $30 billion from NVIDIA, and $50 billion from Amazon. This comes alongside an OpenAI and Amazon multi-year strategic partnership, described as a move to “accelerate AI innovation for enterprises, startups, and end consumers around the world.”
The agreement also expands OpenAI’s existing $38 billion multi-year arrangement with AWS by an additional $100 billion over eight years. As part of the expansion, OpenAI has committed to consuming approximately two gigawatts of Trainium capacity through AWS infrastructure to support stateful runtime, frontier, and other advanced workloads. OpenAI says the arrangement will lower costs and improve efficiency at scale, though it also reinforces the magnitude of the capital required to sustain frontier AI development.
The sustainability of the current capital structure underpinning these rapidly advancing capabilities is questionable, especially when set against a rival like Google Gemini, which is backed by Google's deep pockets and diversified internal revenue streams.
Gemini does not need to be instantly profitable as a standalone product. Instead, Google has embedded it across Search, Workspace, Android, developer tooling, and Google Cloud.
This “full stack” strategy allows AI to reinforce existing revenue engines rather than rely solely on new ones. Gemini Enterprise and subsequent releases demonstrate that Google’s approach is integrative. AI is layered into products that already monetize successfully.
That diversification gives Google strategic breathing room. If pricing compresses or inference costs remain elevated, Gemini can be subsidized internally. OpenAI does not enjoy the same insulation.
While OpenAI rocketed to success with the release of ChatGPT back in 2022, Gemini arguably now has the upper hand with financial resources readily available. This arms race poses a threat to OpenAI’s ability to compete.
It was widely reported back in December 2025 that OpenAI CEO Sam Altman had declared a “code red” in an internal staff memo to improve ChatGPT, rattled by the capabilities of Google’s newly launched Gemini 3, which topped many industry benchmarks. The episode underscored the competitive pressure OpenAI felt to stay ahead.
OpenAI’s strategy has increasingly emphasized enterprise adoption, particularly with its recent launch of Frontier and Frontier Alliance. Yet even optimistic revenue forecasts must contend with an underlying economic reality: enterprise AI contracts, while large, may not scale fast enough to match compute expansion.
The Financial Times also reported that enterprise AI is expected to generate $386 billion in annual revenue by 2030. But it estimates OpenAI’s market share at 37%, down from 50% at the end of 2025.
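Taken together, those two figures imply a rough ceiling on OpenAI's enterprise revenue (again an inference from the reported numbers, not a published estimate):

```latex
0.37 \times \$386\,\text{bn} \approx \$143\,\text{bn} \text{ in annual enterprise AI revenue by 2030}
```

Even at that scale, enterprise revenue alone would sit well below the projected compute and infrastructure outlay cited above.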
Industry analysts have observed that while enterprise AI spending is growing, many deployments remain experimental or focused on productivity gains rather than transformative revenue generation. Broader consulting research from firms such as McKinsey has consistently shown that while AI pilots are widespread, measurable financial impact at scale remains uneven across industries.
This leaves uncertainty over how quickly adoption must accelerate for OpenAI to remain financially viable, and whether revenue growth can realistically keep pace with infrastructure commitments.