NVIDIA Shares Blackwell Platform Design for Open Hardware Ecosystem

NVIDIA is sharing key portions of the NVIDIA GB200 NVL72 system electro-mechanical design with the OCP community.

NVIDIA shared its contributions to the Open Compute Project at the 2024 OCP Global Summit. Image courtesy of NVIDIA.

To enhance development of open, scalable data center technologies, NVIDIA reports that it has contributed foundational elements of its NVIDIA Blackwell accelerated computing platform design to the Open Compute Project (OCP) and expanded NVIDIA Spectrum-X support for OCP standards.

NVIDIA is sharing key portions of the NVIDIA GB200 NVL72 system electro-mechanical design with the OCP community, including rack architecture, compute and switch tray mechanicals, liquid-cooling and thermal environment specifications, and NVIDIA NVLink cable cartridge volumetrics.

“Building on a decade of collaboration with OCP, NVIDIA is working alongside industry leaders to shape specifications and designs that can be widely adopted across the entire data center,” says Jensen Huang, founder and CEO of NVIDIA. “By advancing open standards, we’re helping organizations worldwide take advantage of the full potential of accelerated computing and create the AI factories of the future.”

Accelerated Computing Platform

GB200 NVL72 is based on the NVIDIA MGX modular architecture, which enables computer makers to build an array of data center infrastructure designs.

The liquid-cooled system connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs in a rack-scale design. With a 72-GPU NVIDIA NVLink domain, it acts as a single GPU and delivers faster real-time inference for trillion-parameter large language models than the NVIDIA H100 Tensor Core GPU.

The NVIDIA Spectrum-X Ethernet networking platform, which includes the next-generation NVIDIA ConnectX-8 SuperNIC, supports OCP’s Switch Abstraction Interface (SAI) and Software for Open Networking in the Cloud (SONiC) standards. This enables use of Spectrum-X’s adaptive routing and telemetry-based congestion control to accelerate Ethernet performance for scale-out AI infrastructure.

ConnectX-8 SuperNICs deliver accelerated networking at speeds of up to 800 Gb/s and feature programmable packet processing engines optimized for large-scale AI workloads. ConnectX-8 SuperNICs for OCP 3.0 will be available next year.

Infrastructure for Data Centers

NVIDIA is working closely with more than 40 global electronics makers that provide key components for building AI factories.

Learn more about NVIDIA’s contributions to the Open Compute Project at the 2024 OCP Global Summit, taking place at the San Jose Convention Center from Oct. 15-17.

Sources: Press materials received from the company and additional information gleaned from the company’s website.

About the Author

DE Editors

DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].
