NVIDIA's DGX SuperPOD architecture is designed for large-scale AI model training, inference, and HPC workloads. The H100 SuperPOD consists of 256 GPUs interconnected via NVLink and NVSwitch, delivering an all-reduce bandwidth of 450 GB/s. The GH200 SuperPOD is built from GH200 Grace Hopper superchips, each pairing a Hopper GPU with a Grace CPU, and uses NVLink 4.0 for enhanced connectivity and scalability. The GB200 SuperPOD, built from GB200 superchips that combine Blackwell GPUs with Grace CPUs, targets even larger AI workloads with a 576-GPU configuration.
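To put the 450 GB/s all-reduce figure in context, a back-of-the-envelope model can estimate how long synchronizing gradients across the pod would take. The sketch below is illustrative only and not from the article: it uses the standard ring all-reduce traffic formula, and the 10 GB payload size is a hypothetical example.

```python
# Illustrative sketch: idealized ring all-reduce time on an H100 SuperPOD-like
# fabric. The 450 GB/s per-GPU bandwidth comes from the article; the payload
# size is an assumption chosen for the example.

def allreduce_time_s(payload_bytes: float, n_gpus: int, bw_bytes_per_s: float) -> float:
    """Ring all-reduce moves 2*(N-1)/N of the payload through each GPU's links."""
    traffic = 2 * (n_gpus - 1) / n_gpus * payload_bytes
    return traffic / bw_bytes_per_s

# Example: 10 GB of gradients reduced across 256 GPUs at 450 GB/s per GPU.
t = allreduce_time_s(10e9, 256, 450e9)
print(f"{t * 1e3:.1f} ms")  # roughly 44 ms under this idealized model
```

Real collectives add latency terms and rarely reach peak link bandwidth, so this is a lower bound rather than a prediction.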