➀ Noam Shazeer, co-lead of Google Gemini, emphasized at Hot Chips 2025 that larger-scale computing resources, i.e., more compute (FLOPS), more memory capacity, and more bandwidth, are critical to advancing LLMs (a back-of-the-envelope FLOPs sketch follows this list);
➁ The scale of AI training has grown from 32 GPUs in 2015 to hundreds of thousands of accelerators today, demanding dedicated supercomputing infrastructure (see the cluster-size sketch below);
➂ Future AI hardware must deliver higher compute density, an optimized memory hierarchy, and scalable networking to support increasingly complex models (see the roofline sketch below).
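
To see why raw FLOPS dominate point ➀, here is a minimal back-of-the-envelope sketch using the widely cited FLOPs ≈ 6·N·D approximation for dense transformer training (N parameters, D tokens). The specific model size and token count are illustrative assumptions, not figures from the talk.

```python
# Back-of-the-envelope training-cost estimate using the common
# FLOPs ~= 6 * N * D approximation (N = parameters, D = tokens).
# The concrete numbers below are illustrative assumptions.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

N = 70e9   # assumed 70B-parameter model
D = 15e12  # assumed 15T training tokens

total = training_flops(N, D)
print(f"~{total:.2e} FLOPs total")  # ~6.30e+24 FLOPs
```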
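The jump in point ➁ from 32 GPUs to six-figure accelerator counts follows directly from such a budget: dividing total FLOPs by per-chip throughput and utilization gives wall-clock time at each cluster size. The per-chip throughput and utilization (MFU) values below are assumptions for illustration.

```python
# Wall-clock training time at different cluster sizes.
# Per-chip throughput and utilization (MFU) are assumed values.

SECONDS_PER_DAY = 86_400

def training_days(total_flops: float, n_chips: int,
                  chip_flops: float = 1e15,   # assumed ~1 PFLOP/s per chip
                  mfu: float = 0.4) -> float:  # assumed 40% utilization
    """Days needed to complete a FLOP budget on n_chips accelerators."""
    return total_flops / (n_chips * chip_flops * mfu) / SECONDS_PER_DAY

budget = 6.3e24  # FLOP budget from the estimate above
for chips in (32, 1_000, 100_000):
    print(f"{chips:>7,} chips -> {training_days(budget, chips):,.0f} days")
# 32 chips take ~5,700 days; only at ~100,000 chips does the
# run finish in about two days.
```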
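The memory-hierarchy demand in point ➂ can be made concrete with a simple roofline check: a workload is bandwidth-bound when its arithmetic intensity (FLOPs per byte moved) falls below the chip's FLOPS-to-bandwidth ratio. The chip specs here are assumptions chosen for illustration, not any specific part.

```python
# Roofline check: compute-bound vs. memory-bandwidth-bound.
# Chip specs are illustrative assumptions.

CHIP_FLOPS = 1e15  # assumed ~1 PFLOP/s peak compute
CHIP_BW    = 4e12  # assumed ~4 TB/s HBM bandwidth

ridge = CHIP_FLOPS / CHIP_BW  # FLOPs/byte needed to saturate compute
print(f"ridge point: {ridge:.0f} FLOPs/byte")  # 250 FLOPs/byte

def bound(flops: float, bytes_moved: float) -> str:
    """Classify a workload relative to the ridge point."""
    return "compute-bound" if flops / bytes_moved >= ridge else "bandwidth-bound"

# Example: decoding one token of a 70B model in bf16 reads ~140 GB of
# weights for ~140 GFLOPs of work, i.e. intensity ~1 FLOP/byte.
print(bound(1.4e11, 1.4e11))  # bandwidth-bound
```

This is why LLM inference pressures memory capacity and bandwidth as much as raw compute: at ~1 FLOP/byte, the chip in this sketch idles its arithmetic units waiting on HBM.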