<p>➀ Gelsinger draws a distinction between throughput computing and scalar computing, noting that NVIDIA's focus on throughput-oriented GPU computing is what positioned it for AI workloads.</p><p>➁ He argues that GPUs are overpriced for AI inference and that the market needs more cost-effective alternatives.</p><p>➂ Gelsinger points to NPUs as a potentially more efficient option for AI inference.</p>