OpenAI, the company behind ChatGPT, has signed a major $38 billion deal with Amazon Web Services (AWS) to power and scale its artificial intelligence (AI) systems. The multi-year deal will see OpenAI using AWS’s cloud infrastructure to run and expand its core AI workloads, starting immediately.
Under the agreement, AWS will provide hundreds of thousands of NVIDIA GPUs, with the ability to scale to tens of millions of CPUs, over the next seven years. All of this capacity is expected to be deployed by the end of 2026, with room to expand further into 2027 and beyond.
The infrastructure AWS is building for OpenAI is designed for performance and efficiency. By clustering NVIDIA GB200 and GB300 GPUs on Amazon EC2 UltraServers, AWS will enable low-latency performance across interconnected systems, allowing OpenAI to handle a variety of workloads with the flexibility to adjust as its needs evolve.
“Scaling frontier AI requires massive, reliable compute,” said OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
Matt Garman, CEO of AWS, added, “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions.”
This partnership builds on previous collaborations between the two companies. Earlier this year, OpenAI’s open-weight foundation models became available on Amazon Bedrock, making them accessible to millions of AWS customers.
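For readers curious about what that Bedrock availability looks like in practice, below is a minimal sketch of calling a model through Bedrock's Converse API with boto3. The region and the model ID ("openai.gpt-oss-120b-1:0") are illustrative assumptions, not confirmed by this article; check the Bedrock model catalogue in your own account for the exact identifiers you have access to.

```python
# Minimal sketch: invoking an OpenAI open-weight model hosted on Amazon Bedrock
# via the Converse API. The region and model ID below are assumptions for
# illustration only -- verify them against your Bedrock model catalogue.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")  # assumed region

response = client.converse(
    modelId="openai.gpt-oss-120b-1:0",  # assumed / illustrative model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarise the OpenAI-AWS deal in one sentence."}],
        }
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

# The Converse API returns the assistant reply under output.message.content
print(response["output"]["message"]["content"][0]["text"])
```

The Converse API is Bedrock's model-agnostic chat interface, so the same call shape works whether the underlying model comes from OpenAI or another provider on the platform.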