Server Vendors Unveil AI-Driven Data Center Systems at COMPUTEX 2024

June 2, 2024

NVIDIA and leading server manufacturers have unveiled a series of systems powered by the NVIDIA Blackwell architecture, featuring Grace CPUs and advanced networking infrastructure. This move is set to enable enterprises to build AI factories and data centers, driving the next wave of generative AI breakthroughs.

During his keynote at COMPUTEX 2024, NVIDIA founder and CEO Jensen Huang announced that top server manufacturers such as ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron, and Wiwynn will deliver a range of AI systems. These systems, built on NVIDIA GPUs and networking, will cater to cloud, on-premises, embedded, and edge AI applications.

Mr. Huang declared, “The next industrial revolution has begun. Companies and countries are partnering with NVIDIA to shift the trillion-dollar traditional data centers to accelerated computing and build a new type of data center – AI factories – to produce a new commodity: artificial intelligence.” He emphasized that the entire industry, from server and networking manufacturers to software developers, is preparing for Blackwell to accelerate AI-powered innovation across all fields.

These new offerings will cater to a variety of applications, spanning configurations from single- to multi-GPU setups, x86- to Grace-based processors, and air- to liquid-cooling technologies. Additionally, to expedite the development of systems in various sizes and configurations, the NVIDIA MGX modular reference design platform now supports Blackwell products. This includes the new NVIDIA GB200 NVL2 platform, designed to deliver strong performance for mainstream large language model inference, retrieval-augmented generation, and data processing.

Accelerated Computing Needs of Data Centers

The GB200 NVL2 is tailored for emerging market opportunities, such as data analytics, where companies invest tens of billions of dollars annually. The platform leverages high-bandwidth memory performance through NVLink-C2C interconnects and dedicated decompression engines in the Blackwell architecture, accelerating data processing by up to 18 times with 8 times better energy efficiency compared to x86 CPUs.

To address the diverse accelerated computing needs of global data centers, NVIDIA MGX provides a reference architecture that allows server manufacturers to build over 100 system design configurations “quickly and cost-effectively.” Manufacturers can start with a basic system architecture for their server chassis and then select their GPU, DPU, and CPU to meet different workload requirements. Currently, more than 90 systems from over 25 partners are either released or in development, a significant increase from the 14 systems from six partners the previous year. The MGX architecture helps reduce development costs by up to three-quarters and shortens development time by two-thirds, to just six months.

Intel and AMD

Both AMD and Intel are supporting the MGX architecture, with plans to introduce their own CPU host processor module designs. This includes AMD’s next-generation Turin platform and Intel’s Xeon 6 processor with P-cores (formerly codenamed Granite Rapids). These reference designs allow any server system builder to save development time while ensuring consistency in design and performance.

NVIDIA’s latest platform, the GB200 NVL2, leverages MGX and Blackwell, offering a scale-out, single-node design that enables various system configurations and networking options. This ensures seamless integration of accelerated computing into existing data center infrastructure. The GB200 NVL2 is part of the Blackwell product lineup, which also includes Blackwell Tensor Core GPUs, GB200 Grace Blackwell Superchips, and the GB200 NVL72.

NVIDIA’s extensive partner ecosystem includes TSMC, the world’s leading semiconductor manufacturer and an NVIDIA foundry partner, as well as global electronics makers that provide key components to create AI factories. These include innovations in server racks, power delivery, cooling solutions, and more from companies such as Amphenol, Asia Vital Components (AVC), Cooler Master, Colder Products Company (CPC), Danfoss, Delta Electronics, and LITEON.

This collaborative effort is intended to allow rapid development and deployment of new data center infrastructure to meet the needs of global enterprises. The infrastructure is further accelerated by Blackwell technology, NVIDIA Quantum-2 or Quantum-X800 InfiniBand networking, NVIDIA Spectrum-X Ethernet networking, and NVIDIA BlueField-3 DPUs in servers from leading systems makers Dell Technologies, Hewlett Packard Enterprise, and Lenovo. Enterprises can also access the NVIDIA AI Enterprise software platform, which includes NVIDIA NIM inference microservices, to create and run production-grade generative AI applications.

Blackwell Technology in Taiwan

Mr. Huang also highlighted that Taiwan’s leading companies are rapidly adopting Blackwell technology to enhance their AI capabilities. Chang Gung Memorial Hospital, Taiwan’s premier medical center, plans to use the NVIDIA Blackwell computing platform to advance biomedical research and accelerate imaging and language applications, improving clinical workflows and patient care.

Foxconn, one of the world’s largest electronics manufacturers, is set to use NVIDIA Grace Blackwell to develop smart solution platforms for AI-powered electric vehicles and robotics. They also aim to expand language-based generative AI services to provide more personalized customer experiences.

NVIDIA’s latest advancements and partnerships mark a significant step in the AI-driven transformation of data centers, one poised to reshape industries worldwide.

“ASUS is working with NVIDIA to take enterprise AI to new heights with our powerful server lineup, which we’ll be showcasing at COMPUTEX,” said Jonney Shih, Chairman at ASUS. “Using NVIDIA’s MGX and Blackwell platforms, we’re able to craft tailored data center solutions built to handle customer workloads across training, inference, data analytics and HPC.”

“Our building-block architecture and rack-scale, liquid-cooling solutions, combined with our in-house engineering and global production capacity of 5,000 racks per month, enable us to quickly deliver a wide range of game-changing NVIDIA AI platform-based products to AI factories worldwide,” said Charles Liang, president and CEO at Supermicro. “Our liquid-cooled or air-cooled high-performance systems with rack-scale design, optimized for all products based on the NVIDIA Blackwell architecture, will give customers an incredible choice of platforms to meet their needs for next-level computing, as well as a major leap into the future of AI.”
