OpenAI enters chip manufacturing with Broadcom to scale AI systems


OpenAI is expanding its footprint from AI software into hardware with a major chip manufacturing partnership announced with Broadcom. The multi-year collaboration will see the development and deployment of custom AI accelerator racks built to deliver up to 10 gigawatts of computing capacity—positioning OpenAI not just as a leading developer of AI models, but as an emerging force in infrastructure design and production.

The move signals a growing shift among AI leaders toward vertical integration, where control over chip design and data center architecture becomes critical to sustaining performance at scale. For OpenAI, it’s also a hedge against the rising costs and supply constraints of relying solely on third-party chipmakers like Nvidia and AMD. With Broadcom’s manufacturing capabilities and networking portfolio, OpenAI gains access to an end-to-end solution that includes Ethernet, PCIe, and optical technologies—all optimized for high-efficiency AI workloads.

A 10-gigawatt play in AI infrastructure

Racks powered by OpenAI-designed accelerators will begin deployment in 2026, with completion targeted for 2029. By embedding learnings from its frontier AI models directly into the hardware layer, OpenAI aims to reduce latency, boost performance, and streamline system efficiency across its infrastructure. The systems will be scaled entirely with Broadcom’s Ethernet technologies, enabling both scale-up and scale-out flexibility across OpenAI’s data centers and partner facilities.

The impact of this deal extends well beyond Silicon Valley. For the manufacturing sector, the partnership is a signal that chip design and production are no longer the exclusive domain of traditional semiconductor firms. As AI companies grow in scale and ambition, they are increasingly investing in tailored hardware pipelines, driving new demand for advanced manufacturing, supply chain coordination, and rack-level integration.

Manufacturing gains from AI’s hardware evolution

For Broadcom, the agreement reinforces its position at the center of AI infrastructure buildout. The company’s semiconductor division is tasked with co-developing the accelerators alongside OpenAI, marking a milestone in the production of application-specific chips for artificial intelligence. The collaboration underscores the growing importance of custom accelerators and the strategic shift toward Ethernet as the dominant networking standard in AI clusters.

OpenAI’s leadership emphasized that this is not merely a technology decision but a foundational move to support long-term growth. With more than 800 million weekly active users and growing adoption across enterprises, OpenAI’s infrastructure needs have evolved rapidly. Custom hardware allows it to optimize for power, performance, and scale simultaneously, at a time when the economics of AI computing are under increasing scrutiny.

While the scale of OpenAI’s ambition has raised eyebrows in financial circles, the decision to manufacture chips signals a pragmatic response to market dynamics. As AI accelerates, so does the need for robust, flexible, and efficient hardware infrastructure. If successful, the Broadcom partnership could serve as a blueprint for other AI firms looking to control more of their hardware destiny.

Sources:

OpenAI