➀ Gelsinger discusses the difference between throughput computing and scalar computing, highlighting NVIDIA's focus on GPU-based computing for AI.

➁ He argues that GPUs are overpriced for AI inference, suggesting a need for more cost-effective solutions.

➂ Gelsinger hints at the potential for NPUs as a more efficient alternative for AI inference.