<p>➀ Cerebras Systems' wafer-scale AI chip, the WSE-3, runs the 70-billion-parameter DeepSeek-R1 model 57 times faster than the fastest GPUs.</p><p>➁ Cerebras CEO Andrew Feldman says enterprise customers are highly enthusiastic about DeepSeek's new R1 inference model, with demand surging within ten days of its launch.</p><p>➂ Built on a full 12-inch wafer using TSMC's 5nm process, the WSE-3 packs 4 trillion transistors, 900,000 AI cores, and 44GB of on-chip SRAM, delivering a total memory bandwidth of 21 PB/s and a peak performance of 125 FP16 petaFLOPS.</p><p>➃ DeepSeek-R1 delivers performance comparable to OpenAI's advanced inference models at a low training cost, and it has been open-sourced, letting tech firms build AI applications on it and chip makers optimize their hardware for the model.</p><p>➄ Andrew Feldman acknowledges that DeepSeek carries some risks, but argues that, as with power tools like electric saws, users simply need to exercise basic judgment.</p>