➀ Gelsinger discusses the difference between throughput computing and scalar computing, highlighting NVIDIA's focus on GPU-based computing for AI.
➁ He argues that GPUs are overpriced for AI inference, suggesting a need for more cost-effective solutions.
➂ Gelsinger hints at the potential for NPUs as a more efficient alternative for AI inference.