➀ Cerebras Systems has introduced the WSE-3, a wafer-scale AI chip built on a 5nm process with 4 trillion transistors and designed for training the largest AI models.
➁ The chip packs 900,000 AI-optimized cores and delivers 125 petaFLOPS of peak AI performance.
➂ With its new inference product, Cerebras targets the inference market, claiming generation speeds of 1,800 tokens per second and significantly outperforming Nvidia's H100.
➃ The design relies on on-chip SRAM for memory bandwidth, reaching 21 PB/s, compared with the 4.8 TB/s of Nvidia's HBM3e.
➄ Cerebras plans to support more models and aims to price competitively, starting at 10 cents per million tokens.
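The bandwidth contrast above (21 PB/s SRAM versus 4.8 TB/s HBM3e) is the core of the inference claim: single-stream autoregressive decode is largely memory-bandwidth-bound, since every model weight must be read once per generated token. A minimal sketch of that roofline bound follows; the 16 GB model size (an 8B-parameter model in 16-bit precision) is an illustrative assumption, not a figure from the article, and the bound ignores KV-cache traffic and compute limits, so real systems land well below it.

```python
def decode_tokens_per_sec(mem_bandwidth_bytes_per_s: float, model_bytes: float) -> float:
    """Roofline upper bound for single-stream autoregressive decode.

    Each generated token requires reading every model weight once, so
    throughput <= memory bandwidth / model size. This ignores KV-cache
    reads, compute limits, and interconnect overhead; real throughput
    is lower.
    """
    return mem_bandwidth_bytes_per_s / model_bytes

# Assumed model: 8B parameters at 16 bits/param ~= 16 GB of weights.
MODEL_BYTES = 8e9 * 2

hbm_bound = decode_tokens_per_sec(4.8e12, MODEL_BYTES)   # HBM3e-class: 4.8 TB/s
sram_bound = decode_tokens_per_sec(21e15, MODEL_BYTES)   # wafer-scale SRAM: 21 PB/s

print(f"HBM3e decode bound: {hbm_bound:,.0f} tokens/s")
print(f"SRAM decode bound:  {sram_bound:,.0f} tokens/s")
```

Under these assumptions the HBM-backed bound works out to roughly 300 tokens/s per stream, while the wafer-scale SRAM bound is orders of magnitude higher, which is consistent with (though far looser than) the 1,800 tokens/s figure the article reports.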