<p>➀ Gelsinger contrasts throughput computing with scalar computing, highlighting NVIDIA's focus on GPU-based computing for AI.</p><p>➁ He argues that GPUs are overpriced for AI inference and that the market needs more cost-effective solutions.</p><p>➂ Gelsinger points to NPUs as a potentially more efficient alternative for AI inference.</p>