➀ NVIDIA's DGX SuperPOD architecture is designed for large-scale AI model training, inference, and HPC workloads. ➁ The H100 SuperPOD connects 256 GPUs via NVLink and NVSwitch, delivering roughly 450 GB/s of all-reduce bandwidth per GPU. ➂ The GH200 SuperPOD is built from GH200 superchips, each pairing a Hopper GPU with a Grace CPU over NVLink-C2C, with fourth-generation NVLink providing GPU-to-GPU connectivity and scalability. ➃ The GB200 SuperPOD, which combines Blackwell GPUs with Grace CPUs in GB200 superchips, targets still larger AI workloads with NVLink domains of up to 576 GPUs.
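As a back-of-envelope illustration of why the quoted all-reduce bandwidth matters for training at this scale, the sketch below estimates the time to all-reduce a full set of gradients across the cluster. The cost model (a bandwidth-optimal ring all-reduce moving 2·(N−1)/N · S bytes per GPU) and the example model size are illustrative assumptions, not vendor figures; only the 450 GB/s and 256-GPU numbers come from the text above.

```python
# Back-of-envelope estimate of all-reduce time on a GPU cluster.
# Assumes a bandwidth-optimal ring all-reduce, in which each GPU
# sends and receives 2*(N-1)/N * S bytes for a payload of S bytes.
# The 450 GB/s figure is the per-GPU all-reduce bandwidth quoted
# for the H100 SuperPOD; the model size is a hypothetical example.

def allreduce_time_s(payload_bytes: float, n_gpus: int, bw_bytes_per_s: float) -> float:
    """Lower-bound time for a ring all-reduce of payload_bytes across n_gpus."""
    traffic_per_gpu = 2 * (n_gpus - 1) / n_gpus * payload_bytes
    return traffic_per_gpu / bw_bytes_per_s

if __name__ == "__main__":
    grad_bytes = 70e9 * 2  # e.g. a hypothetical 70B-parameter model in fp16
    t = allreduce_time_s(grad_bytes, n_gpus=256, bw_bytes_per_s=450e9)
    print(f"~{t * 1000:.0f} ms per full-gradient all-reduce")
```

At these assumed sizes the estimate comes out on the order of hundreds of milliseconds per step, which is why per-GPU fabric bandwidth, not just aggregate FLOPs, governs how well data-parallel training scales.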