Nvidia, the California-based company that invented the graphics processing unit, has unveiled a server specification that gives system manufacturers a modular architecture to meet data centers’ accelerated-computing needs.
The Nvidia MGX will help system makers build more than 100 server variations for artificial intelligence, high-performance computing and Omniverse applications, a news release said.
MGX can reduce development costs by up to three-quarters and cut “development time by two-thirds to just six months,” the release said. ASUS, ASRock Rack, Pegatron, GIGABYTE, QCT and Supermicro will adopt MGX.
“As generative AI permeates across business and consumer lifestyles, building the right infrastructure for the right cost is one of network operators’ greatest challenges,” said Junichi Miyakawa, president and CEO at SoftBank Corp. “We expect that Nvidia MGX can tackle such challenges and allow for multi-use AI, 5G and more depending on real-time workload requirements.”
With MGX, manufacturers start from a basic system architecture optimized for accelerated computing in their server chassis, then select their GPU, DPU and CPU, the release said.
“Design variations can address [special] workloads, such as HPC, data science, large language models, edge computing, graphics and video, enterprise AI, and design and simulation,” the release said. “Multiple tasks, like AI training and 5G can be handled on a single machine, while upgrades to future hardware generations can be frictionless. MGX can also be easily integrated into cloud and enterprise data centers.”
QCT and Supermicro will be first to market, with their MGX designs appearing in August. Supermicro’s ARS-221GL-NR system will include the Nvidia Grace CPU Superchip, while QCT’s S74G-2U system will use the Nvidia GH200 Grace Hopper Superchip.
“Additionally, SoftBank Corp. plans to roll out multiple hyperscale data centers across Japan and use MGX to dynamically allocate GPU resources between generative AI and 5G applications,” Nvidia said.
Kaustubh Sanghani, vice president of GPU products at Nvidia, said enterprises want more “accelerated computing options when architecting data centers that meet their specific business and application needs.”
“We created MGX to help organizations bootstrap enterprise AI, while saving them significant amounts of time and money,” Sanghani added.