NVIDIA's DGX SuperPOD architecture is designed for large-scale AI model training, inference, and HPC workloads. The H100 SuperPOD consists of 256 GPUs interconnected via NVLink and NVSwitch, delivering an all-reduce bandwidth of 450 GB/s. The GH200 SuperPOD integrates Hopper GPUs with Grace CPUs on GH200 superchips, using NVLink 4.0 for enhanced connectivity and scalability. The GB200 SuperPOD, which pairs Blackwell GPUs with Grace CPUs, targets even larger-scale AI workloads with a 576-GPU configuration.
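To put the 450 GB/s figure in context, a rough back-of-the-envelope sketch can estimate how long an all-reduce of gradient data would take across such a cluster. The model below assumes a bandwidth-optimal ring-style all-reduce (where each GPU moves roughly 2(N−1)/N of the payload) and treats the quoted 450 GB/s as the available per-GPU bandwidth; the function name and the 10 GB payload are illustrative assumptions, not figures from the article.

```python
def allreduce_time_seconds(payload_bytes: float,
                           bw_bytes_per_s: float,
                           n_gpus: int) -> float:
    """Lower-bound estimate for a bandwidth-optimal (ring-style) all-reduce.

    Each GPU sends and receives about 2 * (N - 1) / N of the payload,
    so the time is that traffic divided by per-GPU bandwidth.
    Ignores latency, SHARP-style in-network reduction, and overlap.
    """
    traffic = 2 * (n_gpus - 1) / n_gpus * payload_bytes
    return traffic / bw_bytes_per_s


# Hypothetical example: 10 GB of gradients across 256 GPUs at 450 GB/s.
t = allreduce_time_seconds(10e9, 450e9, 256)
print(f"{t * 1e3:.1f} ms")  # roughly 44 ms under these assumptions
```

This ignores in-network reduction (e.g. NVLink SHARP), which can roughly halve the traffic each GPU must move, so real collectives on this hardware may finish faster than the sketch suggests.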