➀ Cerebras Systems introduces the WSE-3 AI chip, built on a 5 nm process with 4 trillion transistors and designed for training the largest AI models. ➁ The chip packs 900,000 AI-optimized cores and delivers 125 petaFLOPS of peak AI performance. ➂ Cerebras is targeting the inference market with a new offering it claims generates 1,800 tokens per second, significantly outperforming Nvidia's H100. ➃ The company relies on on-chip SRAM for memory bandwidth, achieving 21 PB/s, versus the 4.8 TB/s of Nvidia's HBM3e. ➄ Cerebras plans to support more models and aims for competitive pricing, starting at 10 cents per million tokens.
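A quick back-of-the-envelope check of the figures above, assuming the quoted 21 PB/s and 4.8 TB/s bandwidth numbers and the 10-cents-per-million-tokens price (the workload size below is a hypothetical illustration):

```python
# Compare the quoted Cerebras on-chip SRAM bandwidth with Nvidia HBM3e bandwidth.
sram_bw_tbps = 21_000   # 21 PB/s expressed in TB/s
hbm3e_bw_tbps = 4.8     # HBM3e bandwidth in TB/s

ratio = sram_bw_tbps / hbm3e_bw_tbps
print(f"SRAM bandwidth advantage: {ratio:,.0f}x")  # roughly 4,375x

# Inference pricing at the quoted 10 cents per million tokens.
price_per_million_usd = 0.10
tokens = 50_000_000  # hypothetical workload for illustration
cost = tokens / 1_000_000 * price_per_million_usd
print(f"Cost for {tokens:,} tokens: ${cost:.2f}")
```

At these quoted figures the SRAM approach offers over three orders of magnitude more bandwidth than a single HBM3e stack, which is the basis of the tokens-per-second claim.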