Amazon Web Services and OpenAI have announced a multi-year strategic partnership, representing a $38 billion commitment to expand OpenAI's compute capacity over the next seven years. The agreement provides OpenAI with immediate access to AWS infrastructure for running and scaling core artificial intelligence workloads.
Under the partnership, AWS will provide OpenAI with Amazon EC2 UltraServers featuring hundreds of thousands of state-of-the-art NVIDIA GPUs, including GB200 and GB300 models. The infrastructure also includes the ability to scale to tens of millions of CPUs for agentic workloads. OpenAI will begin using AWS compute immediately, with all capacity targeted for deployment before the end of 2026 and expansion capability extending into 2027 and beyond.
The infrastructure deployment clusters NVIDIA GPUs via Amazon EC2 UltraServers on the same network to enable low-latency performance across interconnected systems. The clusters are designed to support a range of workloads, from serving inference for ChatGPT to training next-generation models, with flexibility to adapt to OpenAI's evolving needs. AWS has experience running large-scale AI infrastructure, with clusters topping 500,000 chips.
"Scaling frontier AI requires massive, reliable compute," said Sam Altman, OpenAI co-founder and CEO. "Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone."
Matt Garman, CEO of AWS, stated: "As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimised compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads."
The partnership extends existing collaboration between the companies. Earlier this year, OpenAI's open-weight foundation models became available on Amazon Bedrock, where OpenAI quickly became one of the most popular publicly available model providers. Thousands of customers, including Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health, are working with OpenAI models for agentic workflows, coding, scientific analysis, and mathematical problem-solving.
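For developers curious what working with these models on Bedrock looks like, the sketch below shows how a request to Bedrock's Converse API might be assembled with boto3. The model ID is an assumption for illustration, not confirmed by this article; the identifiers actually available vary by account and region.

```python
# Hypothetical sketch: calling an OpenAI open-weight model on Amazon Bedrock
# through boto3's Converse API. The model ID below is an assumed placeholder;
# check the Bedrock console for the identifiers enabled in your region.

def build_converse_request(prompt: str,
                           model_id: str = "openai.gpt-oss-120b-1:0") -> dict:
    """Assemble the payload for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

# Sending it requires AWS credentials and Bedrock model access:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-west-2")
#   response = client.converse(**build_converse_request("Summarise this report."))
#   print(response["output"]["message"]["content"][0]["text"])
```

Separating payload construction from the network call keeps the example runnable offline and makes the request shape easy to inspect or unit-test.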
The $38 billion infrastructure commitment addresses the demand for computing power driven by the rapid advancement of frontier AI. The immediate availability of compute resources and the phased deployment timeline through 2026 and beyond enable OpenAI to scale frontier model development while maintaining operational flexibility. The partnership strengthens OpenAI's position to deliver advanced AI capabilities to millions of ChatGPT users through AWS's cloud infrastructure platform.