NVIDIA has announced its contribution of key elements from its Blackwell accelerated computing platform design to the Open Compute Project (OCP). This move aims to foster innovation in AI infrastructure by promoting open, efficient, and scalable data center technologies. The announcement was made at the OCP Global Summit.
The company is sharing critical parts of the NVIDIA GB200 NVL72 system's electro-mechanical design with the OCP community. This includes rack architecture, compute and switch tray mechanicals, liquid-cooling and thermal environment specifications, as well as NVIDIA NVLink cable cartridge volumetrics. These contributions are intended to support increased compute density and networking bandwidth.
NVIDIA has a history of contributing to OCP across multiple hardware generations; its past contributions include the NVIDIA HGX H100 baseboard design specification. These efforts aim to enable a broader range of offerings from global computer manufacturers and encourage wider adoption of AI technologies.
The expanded alignment of the NVIDIA Spectrum-X Ethernet networking platform with OCP community-developed specifications allows companies deploying OCP-recognized equipment to boost AI factory performance while preserving their investments and maintaining software consistency.
"Building on a decade of collaboration with OCP, NVIDIA is working alongside industry leaders to shape specifications and designs that can be widely adopted across the entire data center," said Jensen Huang, founder and CEO of NVIDIA. "By advancing open standards, we’re helping organizations worldwide take advantage of the full potential of accelerated computing and create the AI factories of the future."
The GB200 NVL72 system features a modular architecture based on NVIDIA MGX™, which helps computer makers build diverse data center infrastructure designs efficiently. It connects 36 NVIDIA Grace CPUs and 72 Blackwell GPUs in a rack-scale design for significant computational power.
NVIDIA's Spectrum-X Ethernet networking platform now supports open networking standards such as the Switch Abstraction Interface (SAI) and Software for Open Networking in the Cloud (SONiC), allowing customers to use adaptive routing for enhanced Ethernet performance in scale-out AI infrastructure. ConnectX-8 SuperNICs will offer accelerated networking at speeds of up to 800 Gb/s starting next year.
As data centers evolve to meet more complex AI computing needs, NVIDIA is collaborating with more than 40 global electronics makers that provide essential components for building AI factories. Partners such as Meta are also innovating on top of the Blackwell platform; Meta plans to contribute its Catalina AI rack architecture, based on GB200 NVL72, to OCP.
"NVIDIA has been a significant contributor to open computing standards for years," said Yee Jiun Song, vice president of engineering at Meta. "As we progress to meet increasing computational demands...NVIDIA’s latest contributions in rack design and modular architecture will help speed up development."
More details about these contributions are available at the 2024 OCP Global Summit, held at the San Jose Convention Center from Oct. 15-17.