<p>➀ Cerebras Systems' wafer-scale AI chip (WSE-3) runs the 70-billion-parameter DeepSeek-R1 model 57 times faster than the fastest GPUs.</p><p>➁ Cerebras CEO Andrew Feldman says enterprise customers are highly enthusiastic about DeepSeek's new R1 inference model, with demand surging within ten days of its launch.</p><p>➂ The WSE-3, fabricated on a 12-inch (300mm) wafer using TSMC's 5nm process, packs 4 trillion transistors, 900,000 AI cores, 44GB of on-chip SRAM, and 21PB/s of total memory bandwidth, delivering a peak of 125 FP16 PetaFLOPS.</p><p>➃ DeepSeek-R1 delivers performance comparable to OpenAI's advanced inference models at a low training cost, and it has been open-sourced, allowing tech firms to build AI applications on it and chip makers to optimize their hardware for the model.</p><p>➄ Andrew Feldman notes that while DeepSeek poses some risks, users should exercise basic judgment, just as they would when using an electric saw.</p>