OpenAI partners with Nvidia, Microsoft and others to build MRC: What it is 

HIGHLIGHTS

OpenAI has announced a new networking protocol called MRC.

MRC stands for Multipath Reliable Connection.

As OpenAI explains, MRC is a ‘novel protocol that improves GPU networking performance and resilience in large training clusters.’

OpenAI has announced a new networking protocol called MRC, developed in partnership with major tech companies including Nvidia, Microsoft, AMD, Intel and Broadcom. The company says the new system is designed to make AI supercomputer networks faster, more reliable and more efficient while training advanced AI models.

OpenAI says more than 900 million people use ChatGPT every week. To support such large-scale AI systems, the company believes it needs a type of networking technology that can move huge amounts of data between GPUs without delays or failures slowing things down.

In a blog post, OpenAI said, ‘Our goal was not just to build a fast network, but also to build one that delivers very predictable performance, even in the presence of failures, to keep training jobs moving.’

What is MRC

MRC stands for Multipath Reliable Connection. As OpenAI explains, MRC is a ‘novel protocol that improves GPU networking performance and resilience in large training clusters.’ It is built into the latest 800Gb/s network interfaces. The technology helps GPUs communicate with each other more efficiently during AI training.

Normally, AI training systems send data through a single network path. If that path gets congested or fails, training can slow down or even stop completely. OpenAI says MRC changes this by spreading data packets across hundreds of different paths at the same time. This reduces congestion and helps the system quickly avoid failed connections.
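OpenAI has not published MRC’s actual path-selection algorithm in this article, but the basic idea it describes, spreading packets across many paths while steering around failed ones, can be sketched roughly in Python. All names below are hypothetical illustrations, not MRC’s real API:

```python
def spray_packets(packets, paths, failed=frozenset()):
    """Assign each packet to a healthy path in round-robin order,
    skipping any paths marked as failed (illustrative sketch only)."""
    healthy = [p for p in paths if p not in failed]
    if not healthy:
        raise RuntimeError("no healthy paths available")
    # Spreading consecutive packets over different paths reduces the
    # chance that congestion on any single path delays the transfer.
    return {pkt: healthy[i % len(healthy)] for i, pkt in enumerate(packets)}

# 8 packets sprayed over 4 paths, with one path down
packets = [f"pkt{i}" for i in range(8)]
paths = ["path0", "path1", "path2", "path3"]
mapping = spray_packets(packets, paths, failed={"path2"})
```

The point of the sketch is only the failure behaviour: traffic keeps flowing over the remaining paths instead of stalling on the broken one.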

The company explained that training large AI models can involve millions of data transfers in a single step. Even one delayed transfer can cause GPUs to sit idle, wasting time and computing power.
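The cost of a single straggler is easy to see in a toy calculation: when a synchronous training step must wait for every transfer to finish, step time is set by the slowest transfer, not the typical one. The numbers below are illustrative, not OpenAI’s measurements:

```python
# A synchronous training step waits for ALL transfers, so the
# slowest one sets the pace and idles every other GPU.
transfer_times_ms = [1.0] * 9_999 + [250.0]  # one delayed transfer

typical = sorted(transfer_times_ms)[len(transfer_times_ms) // 2]  # median transfer time
step_time = max(transfer_times_ms)   # the step is gated by the straggler
slowdown = step_time / typical       # how much the one slow transfer costs
```

Here a single delayed transfer out of ten thousand makes the whole step 250 times slower than a typical transfer, which is why routing around congestion matters at this scale.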

MRC is already being used in OpenAI’s largest Nvidia GB200 supercomputers. OpenAI has also shared the MRC specification through the Open Compute Project so that others can use it.

Ayushi Jain

Ayushi works as Chief Copy Editor at Digit, covering everything from breaking tech news to in-depth smartphone reviews. Prior to Digit, she was part of the editorial team at IANS.