OpenAI and Broadcom have announced a strategic collaboration to deploy 10 gigawatts of custom AI accelerators designed by OpenAI and developed in partnership with Broadcom. The multi-year partnership enables OpenAI to embed learnings from frontier model development directly into hardware architecture, with rack deployments targeted to begin in the second half of 2026 and complete by the end of 2029.
OpenAI will design the accelerators and systems, which Broadcom will co-develop and deploy. The companies have signed a term sheet to deploy racks incorporating the AI accelerators, scaled entirely with Ethernet and connectivity solutions from Broadcom. Deployments will occur across OpenAI's facilities and partner data centres to meet what the companies characterise as surging global demand for AI infrastructure.
"Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses," said Sam Altman, co-founder and CEO of OpenAI. "Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity."
Hock Tan, President and CEO of Broadcom, stated the collaboration "signifies a pivotal moment in the pursuit of artificial general intelligence," noting that "OpenAI has been in the forefront of the AI revolution since the ChatGPT moment."
OpenAI co-founder and President Greg Brockman explained the strategic rationale: "By building our own chip, we can embed what we've learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence."
The deployment architecture utilises Broadcom's portfolio of Ethernet, PCIe and optical connectivity solutions for scale-up and scale-out networking. Charlie Kawwas, President of Broadcom's Semiconductor Solutions Group, noted that "custom accelerators combine remarkably well with standards-based Ethernet scale-up and scale-out networking solutions to provide cost and performance optimised next generation AI infrastructure."
The collaboration addresses OpenAI's infrastructure requirements as the company reports over 800 million weekly active users and adoption across global enterprises, small businesses and developers. Designing custom accelerators lets OpenAI optimise hardware specifically for its model architectures and product requirements, potentially reducing dependency on third-party GPU supply chains. The Ethernet-based approach favours open, standards-based connectivity over proprietary interconnects for AI cluster networking at datacentre scale.
OpenAI's move into custom silicon design represents vertical integration into the hardware layer, following patterns established by hyperscale cloud providers. The 10-gigawatt deployment scale and four-year timeline indicate substantial capital commitment to proprietary infrastructure. The partnership with Broadcom provides semiconductor design and manufacturing expertise while maintaining OpenAI's architectural control. The collaboration reinforces Ethernet positioning for AI datacentre networking versus alternative interconnect technologies. For enterprises evaluating AI infrastructure investments, the timeline suggests OpenAI's custom hardware will influence model deployment architectures beginning in late 2026.