<p>➀ Gelsinger draws a distinction between throughput computing and scalar computing, noting NVIDIA's focus on GPU-based throughput computing for AI.</p><p>➁ He argues that GPUs are overpriced for AI inference and that the market needs more cost-effective alternatives.</p><p>➂ Gelsinger hints that 'NPUs' could serve as a more efficient option for AI inference.</p>