After NVIDIA, OpenAI chooses AMD chips for ChatGPT’s future: But why?
OpenAI partners with AMD in multibillion-dollar AI compute deal
AMD gains major win as OpenAI diversifies beyond Nvidia chips
Sam Altman’s 250-gigawatt data plan drives global AI infrastructure race
It’s not every day that two of Silicon Valley’s biggest names sign a deal that could alter the physics of the AI universe. But that’s exactly what happened when OpenAI and AMD announced a multibillion-dollar partnership to power the next generation of ChatGPT and other AI models.
Under the terms of the deal, OpenAI will purchase up to 6 gigawatts of AMD’s Instinct GPUs – starting with the forthcoming MI450 chip in 2026 – either directly or through cloud partners. In return, AMD gains something even more valuable: much-needed validation as a major player in powering cutting-edge AI and frontier models in the cloud. Make no mistake, this isn’t just a simple chip sale – for AMD, it’s a seat at the most exclusive table in tech. For the last few years, Nvidia has owned the AI semiconductor conversation. With this OpenAI deal, AMD finally has its breakout AI moment.
The scale of this arrangement is staggering. Six gigawatts of compute is enough to power entire hyperscale data centers across continents. Industry insiders estimate that the deal could generate tens of billions of dollars in revenue for AMD over the next five years. And in an unexpected twist, OpenAI also secured warrants to acquire up to 160 million AMD shares – which is roughly 10 percent of the company – if certain performance and deployment milestones are hit.

Also read: AMD Challenges NVIDIA with MI350 Series, ROCm 7, and Free Developer Cloud
For AMD, it’s a rare alignment of timing and opportunity. After years of living in Nvidia’s shadow, Lisa Su’s company is now supplying silicon for the very organization that arguably kicked off the AI revolution. For OpenAI, on the other hand, it’s about securing silicon at the scale of its ambition.
OpenAI shakes hands with both Nvidia and AMD
The AMD partnership comes just weeks after OpenAI’s massive alliance with Nvidia, announced earlier in 2025. That deal, in case you don’t remember, was almost cinematic in scale: Nvidia pledged a $100 billion investment in OpenAI, including the construction of AI data centers delivering at least 10 gigawatts of compute.
Unlike traditional chip supply contracts, Nvidia’s agreement was part infrastructure, part investment, and part bet on OpenAI’s dominance. The AMD deal, by contrast, feels more like strategic diversification, with OpenAI hedging the risk of relying on a single supplier while driving down the cost of compute.

Also read: NVIDIA becomes first company ever to hit $4 trillion mark, spurred by AI
If the Nvidia partnership was OpenAI’s insurance policy, the AMD deal is its wildcard. AMD’s new MI450 chip – successor to the MI300 – has been touted as a serious contender for AI inference workloads. These are the processes that actually run models like ChatGPT once they’ve been trained. Unlike training, which demands brute-force power, inference is about efficiency – doing more with less energy and latency.
That’s where AMD has a real shot at differentiation. For OpenAI, deploying AMD chips for inference could significantly cut operational costs while freeing Nvidia hardware for training larger models. It’s a smart form of compute workload management.
Sam Altman, OpenAI’s restless CEO, has been transparent about this calculus. “If we had more GPUs,” he admitted recently, “we’d be able to handle demand surges better.” He also famously said in July 2025 that “more compute means we can give you more AI.” That line says everything about Altman’s current mindset: the biggest constraint on AI isn’t algorithms, it’s hardware.
OpenAI’s compute arms race
With that background, you can make more sense of OpenAI’s recent moves, which read like a shopping spree across the semiconductor supply chain. Beyond Nvidia and AMD, the company has inked letters of intent with Samsung Electronics and SK Hynix to lock down the high-bandwidth memory (HBM) and DRAM needed to keep its data centers fed. Reports suggest future OpenAI facilities could consume nearly 900,000 DRAM wafers per month, accounting for as much as 40 percent of global output.
Also read: GPT-5 launched: Sam Altman’s 3 key claims on AI, AGI and India
There’s also the $10 billion partnership with Broadcom, where both companies are exploring custom AI chip development – part of Altman’s long-term strategy to “drive down the cost of compute” and reduce dependency on any single supplier. Together, these deals represent not isolated partnerships but a coordinated attempt to build an end-to-end AI compute pipeline – from GPUs to memory to networking.
For Altman, it’s not just ambition – it comes down to bare necessity. OpenAI’s internal projections for data center growth are almost unfathomable. An internal memo from September 2025 revealed plans to build 250 gigawatts of data center capacity by 2033 – a scale roughly equivalent to the output of 250 nuclear power plants, or just shy of 10 percent of the world’s average electricity consumption today.
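The arithmetic behind that comparison is easy to sanity-check. The reactor size and global demand figures below are rough assumptions for illustration, not numbers from the memo:

```python
# Back-of-the-envelope check of the 250 GW scale comparison.
# Assumptions (not from the article): a large nuclear reactor outputs
# roughly 1 GW, and average global electricity demand is roughly 3,000 GW.

PLANNED_CAPACITY_GW = 250          # figure from the reported memo
NUCLEAR_PLANT_GW = 1.0             # typical large reactor, assumed
GLOBAL_ELECTRICITY_GW = 3000.0     # rough global average load, assumed

equivalent_plants = PLANNED_CAPACITY_GW / NUCLEAR_PLANT_GW
share_of_grid = PLANNED_CAPACITY_GW / GLOBAL_ELECTRICITY_GW * 100

print(f"~{equivalent_plants:.0f} nuclear plants")
print(f"~{share_of_grid:.1f}% of average global electricity demand")
```

On those rough numbers, 250 GW works out to about 250 reactor-equivalents and a little over 8 percent of average global electricity demand – consistent with the "just shy of 10 percent" framing.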

Altman himself has called that goal “astronomical.” But there’s a logic to the madness. With AGI on the horizon, OpenAI believes compute will be the single greatest determinant of progress – and access to it, the single greatest source of inequality. As Altman warned in one forum: “Without enough infrastructure, AI will become a limited resource that wars get fought over and that becomes mostly a tool for rich people.”
The demand for AI compute is growing exponentially, and the bottleneck isn’t going away anytime soon – something that OpenAI knows acutely. The Stargate project in Texas – a half-trillion-dollar infrastructure initiative with six sites planned – was only the opening act. The AMD partnership is the next movement in that same symphony. For AMD, this is the validation moment it has chased for years in the Age of GenAI. For OpenAI, it’s another brick in the wall of a future that Altman’s racing to secure against all odds.
Jayesh Shinde
Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.