<p>➀ Cerebras Systems' wafer-scale AI chip (WSE-3) runs the 70-billion-parameter DeepSeek-R1 model 57 times faster than the fastest GPUs.</p><p>➁ Cerebras CEO Andrew Feldman says enterprise customers are highly enthusiastic about DeepSeek's new R1 reasoning model, with demand surging within ten days of its launch.</p><p>➂ The WSE-3 chip, fabricated on a 12-inch wafer using TSMC's 5nm process, packs 4 trillion transistors, 900,000 AI cores, and 44GB of on-chip SRAM with 21 PB/s of aggregate memory bandwidth, delivering a peak of 125 FP16 PetaFLOPS.</p><p>➃ DeepSeek-R1 offers performance comparable to OpenAI's advanced reasoning models at a far lower training cost, and its open-source release lets tech firms build AI applications on it and chip makers optimize their hardware for it.</p><p>➄ Andrew Feldman emphasizes that while DeepSeek carries some risks, users need only exercise basic judgment, much as they would with an electric saw.</p>
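The spec figures above allow a quick back-of-envelope check of why keeping model weights in on-chip SRAM matters for inference. A minimal sketch, using only the two numbers quoted in the article (125 FP16 PetaFLOPS peak compute and 21 PB/s memory bandwidth) to compute the chip's "machine balance", i.e. how many FLOPs it can issue per byte moved:

```python
# Back-of-envelope machine-balance estimate from the WSE-3 figures
# quoted above; these two constants come from the article, nothing else.
peak_flops = 125e15      # 125 PetaFLOPS peak (FP16)
mem_bandwidth = 21e15    # 21 PB/s aggregate on-chip SRAM bandwidth

# FLOPs available per byte of memory traffic ("machine balance").
# Memory-bound workloads such as LLM token generation benefit when this
# ratio is low, since each weight byte fetched supports only ~2 FLOPs.
balance = peak_flops / mem_bandwidth
print(f"machine balance: {balance:.2f} FLOPs/byte")
```

A low ratio like this (roughly 6 FLOPs per byte) indicates the design is weighted toward bandwidth rather than raw compute, which is consistent with the large speedup claimed for bandwidth-bound LLM token generation.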