OpenAI is building its own AI chips with Broadcom: Deal is worth billions

Updated on 14-Oct-2025
HIGHLIGHTS

OpenAI partners with Broadcom in a multibillion-dollar deal to design custom AI chips

Broadcom collaboration helps OpenAI reduce reliance on Nvidia GPUs

OpenAI’s in-house chip project marks new phase in AI infrastructure

In a bold move to redefine the future of AI infrastructure, OpenAI has entered into a multibillion-dollar partnership with chipmaker Broadcom to design and manufacture its own artificial intelligence processors. The deal marks one of the most significant shifts in OpenAI’s hardware strategy to date, signaling its intent to reduce dependence on Nvidia’s GPUs – the dominant force powering AI training worldwide.

According to The Information, the collaboration involves not just Broadcom but also Arm, the SoftBank-owned chip design company, which will help integrate custom processing cores optimized for OpenAI’s large-scale models. The project, internally referred to as a cornerstone of OpenAI’s “compute independence” plan, could begin deployment as early as 2026, with scaling expected through the decade.

Also read: Google updates NotebookLM with new features: How to use

The race to control AI compute

Over the past year, OpenAI’s rapid product expansion – from ChatGPT to advanced multimodal models – has strained access to GPU capacity. Nvidia’s H100 chips remain the gold standard for training large language models, but global shortages and soaring costs have made self-reliance an attractive, if daunting, proposition.

By teaming up with Broadcom, one of the world’s largest semiconductor companies, OpenAI gains access to cutting-edge chip design and production expertise. Broadcom has a long history of designing advanced custom accelerators and networking hardware, which could prove crucial as OpenAI looks to build data centers with tens of thousands of interconnected AI chips.

Inside the deal

While neither company has disclosed financial terms, industry estimates peg the partnership’s value at several billion dollars over multiple years, placing it on par with OpenAI’s existing cloud contracts with Microsoft.

Broadcom will reportedly oversee fabrication, testing, and integration, while OpenAI will take the lead on chip architecture and optimization for its AI workloads. The chips, expected to rival or exceed Nvidia’s next-generation offerings, could power OpenAI’s in-house training clusters and possibly Azure-based systems that serve enterprise clients.

The inclusion of Arm in the collaboration hints at a hybrid compute design, where Arm CPUs work alongside AI accelerators. Such an approach mirrors trends seen in high-performance AI servers, where flexible CPU cores manage orchestration, memory allocation, and data routing while accelerators handle dense matrix computation.
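To make that division of labor concrete, here is a minimal sketch of the pattern in PyTorch: host CPU cores prepare and route data while an attached accelerator performs the dense matrix math. The device choice, tensor sizes, and function names are illustrative assumptions; nothing about OpenAI’s actual hardware or software stack has been disclosed.

```python
import torch

def route_batch(batch: torch.Tensor, device: torch.device) -> torch.Tensor:
    # Host-side orchestration: decide where the work goes and move the data.
    return batch.to(device, non_blocking=True)

def main() -> None:
    # Fall back to CPU so the sketch runs anywhere; a real deployment
    # would target a dedicated accelerator.
    accel = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    weights = torch.randn(4096, 4096, device=accel)  # resident on the accelerator
    batch = torch.randn(32, 4096)                    # prepared on the host CPU

    x = route_batch(batch, accel)  # CPU routes the data
    y = x @ weights                # accelerator does the dense matmul
    print(y.cpu().shape)           # host collects the result

if __name__ == "__main__":
    main()
```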

Also read: Panther Lake: 2026 Intel laptops to have faster GPU, better AI and battery

Strategic implications

This announcement extends a broader pattern among AI leaders seeking to secure hardware independence. Google’s Tensor Processing Units (TPUs) already power most of its AI workloads, while Meta and Amazon have both invested heavily in their own chips to cut costs and tailor performance.

For OpenAI, however, the move carries an added dimension: reducing exposure to Nvidia, whose dominance in the GPU market gives it immense pricing power. Custom chips could offer OpenAI significant savings and deeper control over efficiency, latency, and performance per watt.

Still, the path is fraught with challenges. Designing, validating, and manufacturing chips at scale can take years and cost billions in R&D. Even for giants like Apple or Google, achieving competitive yields and software compatibility remains complex. Analysts caution that any misstep in execution could delay OpenAI’s rollout or blunt its competitive edge.

A long-term vision for AI infrastructure

OpenAI CEO Sam Altman has long emphasized that access to massive compute will define who leads the next era of AI. Reports suggest OpenAI aims to deploy 10 gigawatts of AI accelerator capacity by 2029, making it one of the largest compute buildouts ever attempted by a private company.
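For a rough sense of that scale, a back-of-envelope calculation helps. Assuming roughly 1 kW per accelerator including cooling and networking overhead (an illustrative figure, not a disclosed one; Nvidia’s H100 alone is rated at about 700 W), 10 gigawatts implies on the order of ten million chips:

```python
# Back-of-envelope estimate; the per-chip power figure is an assumption.
target_watts = 10e9             # 10 gigawatts of planned capacity
watts_per_accelerator = 1_000   # ~1 kW per chip with overhead (assumed)

chips = target_watts / watts_per_accelerator
print(f"~{chips / 1e6:.0f} million accelerators")  # ~10 million
```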

If successful, this partnership could transform OpenAI from a model-building lab into a full-stack AI company, spanning chips, data centers, and software. It would also position the company to compete not just in AI models, but in the foundational hardware layer that powers them.

As AI systems become more multimodal, real-time, and embodied in devices and robots, OpenAI’s push into chipmaking could mark the start of a new era: one where intelligence isn’t just written in code, but etched into silicon.

Also read: What is MAI-Image-1: Microsoft’s first in-house text-to-image generation AI model

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
