➀ Noam Shazeer, co-lead of Google Gemini, emphasized at Hot Chips 2025 that larger-scale computing resources (e.g., FLOPS, memory capacity, and bandwidth) are critical for advancing LLMs.

➁ The scale of AI training has grown from 32 GPUs in 2015 to hundreds of thousands of accelerators today, requiring dedicated supercomputing infrastructure (see the rough calculation below).

➂ Future AI hardware must deliver greater compute density, better memory-hierarchy optimization, and improved network scalability to support increasingly complex models.
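To give a sense of the growth rate implied by point ➁, here is a minimal back-of-the-envelope sketch. The figure of 200,000 accelerators is an assumed stand-in for "hundreds of thousands" and does not come from the talk; the point is only that the implied trajectory is roughly a doubling of fleet size every nine to ten months over the decade.

```python
import math

# Illustrative arithmetic only. The 200,000 figure is an assumption
# standing in for "hundreds of thousands of accelerators today".
gpus_2015 = 32
accelerators_2025 = 200_000
years = 2025 - 2015

growth = accelerators_2025 / gpus_2015           # total scale-up factor
annual = growth ** (1 / years)                   # compounded per-year growth
doubling_months = 12 * math.log(2) / math.log(annual)

print(f"total scale-up:  {growth:,.0f}x")             # ~6,250x
print(f"annual growth:   {annual:.2f}x per year")     # ~2.40x
print(f"doubling time:   {doubling_months:.1f} months")  # ~9.5 months
```

Under these assumed numbers, the accelerator count compounds at roughly 2.4x per year, which is the kind of trajectory that makes dedicated supercomputing infrastructure, rather than incremental cluster growth, a necessity.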