NVIDIA open-sourced MRC, the multi-path RDMA transport protocol powering OpenAI's Blackwell clusters. The protocol spreads a single connection across up to 64 network paths, rerouting traffic in hardware within microseconds when a path fails.
Key facts
- MRC spreads connections across up to 64 network paths.
- Traffic reroutes in hardware within microseconds.
- OpenAI uses MRC on Blackwell clusters.
- Microsoft and Oracle are named as major deployments.
- NVIDIA opened MRC through the Open Compute Project.
NVIDIA open-sourced MRC, a multi-path RDMA transport protocol designed for massive AI training clusters, according to a post by @kimmonismus. The protocol, already deployed in OpenAI's Blackwell clusters, spreads a single connection across multiple network paths—up to 64, per NVIDIA's documentation—enabling hardware-level traffic rerouting within microseconds when a path fails or becomes congested.
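NVIDIA has not published MRC's wire format in the materials cited here, but the core idea of spraying one connection's packets across many paths can be sketched conceptually. The function name, round-robin policy, and path count below are illustrative assumptions, not MRC's actual design:

```python
# Conceptual sketch only -- NOT NVIDIA's implementation. Illustrates
# spreading a single connection's packets across multiple paths,
# as MRC is described doing (up to 64 per NVIDIA's documentation).
NUM_PATHS = 64

def spray(packets, num_paths=NUM_PATHS):
    """Assign each packet of one connection to a path, round-robin.

    Real multi-path transports may instead hash flowlets or weight
    paths by congestion; round-robin is the simplest illustration.
    """
    assignment = {}
    for seq, pkt in enumerate(packets):
        path = seq % num_paths  # deterministic spread over paths
        assignment.setdefault(path, []).append(pkt)
    return assignment

# One 1,000-packet connection over 4 paths: 250 packets per path.
load = spray(range(1000), num_paths=4)
```

The payoff of spraying is that no single link carries the whole connection, so one failed or congested path degrades throughput by at most 1/N rather than stalling the flow entirely.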
The Network Bottleneck
This matters because frontier training is no longer only about GPUs. The network is becoming one of the biggest bottlenecks in AI factories. As cluster sizes grow to tens of thousands of GPUs, single-path RDMA (Remote Direct Memory Access) links create fragile chokepoints. MRC's multi-path design mirrors techniques from multipath transports like MPTCP, but implements them in NIC hardware at the transport layer, avoiding the latency overhead of software-based rerouting and retransmission.
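The failover behavior described above, shifting traffic off a dead path without involving the host software stack, can be sketched as a simple remapping over a path-health bitmap. The bitmap semantics and skip-to-next-healthy policy are hypothetical illustrations; the source does not specify MRC's actual mechanism:

```python
# Conceptual failover sketch, assuming a hypothetical 64-bit
# path-health bitmap (bit i set = path i is healthy). Hardware can
# evaluate a remap like this per-packet in nanoseconds, which is why
# MRC-style rerouting avoids software retransmission latency.
def reroute(path_id, healthy_mask, num_paths=64):
    """Return path_id if healthy, else the next healthy path."""
    for offset in range(num_paths):
        candidate = (path_id + offset) % num_paths
        if healthy_mask & (1 << candidate):
            return candidate
    raise RuntimeError("no healthy paths remain")

# Path 5 fails: its packets shift to path 6 with no sender-side
# software involvement.
all_healthy = (1 << 64) - 1
mask = all_healthy & ~(1 << 5)
```

A software-based transport would instead detect the failure via timeouts and retransmit, which is exactly the millisecond-scale penalty the hardware path avoids.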
Deployment and Adoption
OpenAI is already using MRC on Blackwell clusters. Microsoft and Oracle are also named by NVIDIA as major deployments [per @kimmonismus]. The protocol is optimized for NVIDIA's Spectrum-X Ethernet platform, but by opening it through the Open Compute Project (OCP), NVIDIA is pushing Ethernet into territory historically associated with InfiniBand—NVIDIA's own higher-performance interconnect.
The Strategic Play
This is a classic NVIDIA platform move: an open standard on the surface, a stronger full-stack NVIDIA advantage underneath. By open-sourcing MRC through OCP, NVIDIA gains the credibility of an open standard while ensuring the protocol is optimized first for its own Spectrum-X hardware. Competitors like Broadcom and Intel, which also target AI Ethernet fabrics, will need to implement MRC-compatible endpoints or risk losing compatibility with the largest AI clusters. The unique take: this is not just a protocol release, it's a moat-builder disguised as an open-source contribution, locking in the networking layer of AI factories just as CUDA locked in the compute layer.
What to watch
Watch for Broadcom and Intel to announce MRC-compatible Ethernet endpoints in the next 6–9 months. Also track whether OCP ratifies MRC as a standard—if it does, NVIDIA's networking moat deepens; if not, the protocol remains a de facto standard limited to NVIDIA's ecosystem.