➀ Cambricon Technologies achieved its first quarterly profit in late 2024; ➁ Revenue surged nearly 70% in 2024; ➂ Cambricon's stock skyrocketed over 470% in the past year.
Recent #AI Processor news in the semiconductor industry
➀ OpenAI considered acquiring Cerebras in 2017 to reduce its reliance on Nvidia; ➁ The deal was shelved due to potential conflicts of interest; ➂ Cerebras is now preparing for an IPO and has raised $715 million.
➀ Arm's Total Design initiative has doubled in size within a year; ➁ Samsung Foundry, ADTechnology, and Rebellions collaborate on a 2nm AI processor; ➂ The AI CPU chiplet platform is designed for AI/ML training, cloud computing, and high-performance computing workloads.
➀ IBM introduces the Telum II Processor, featuring a new data processing unit to enhance computing efficiency and accelerate complex I/O protocols. ➁ The Telum II chip, built on a 5-nanometer process, offers a 40% improvement in cache and integrated AI accelerator core performance. ➂ The IBM Spyre Accelerator, designed for AI workloads, supports advanced AI models and ensemble methods, enhancing performance in specialized applications like fraud detection and generative AI.
➀ SoftBank Group has acquired Graphcore, a UK-based AI processor designer, for an undisclosed amount. ➁ Graphcore will become a wholly owned subsidiary of SoftBank, maintaining its brand and architecture. ➂ The acquisition could either bolster SoftBank's Project Izanagi or operate independently, aiming to compete against market leaders like Nvidia.
➀ The Sohu AI chip promises to run AI models 20 times faster and at lower cost than Nvidia's H100 GPUs. ➁ The chip is specialized for the transformer architecture, which is expected to dominate the AI field. ➂ Sohu's bet on transformers appears poised to pay off if that dominance holds.
➀ Ceva has announced two neural network processing cores, the NPN32 and NPN64, designed for SoCs running TinyML models. ➁ The NPN32 features 32 int8 MACs and is optimized for voice, audio, object detection, and anomaly detection. ➂ The NPN64 offers 64 int8 MACs, providing double the performance with enhanced features for more complex AI tasks.
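To make the MAC counts above concrete, here is a minimal sketch of what a single int8 multiply-accumulate (MAC) operation computes. This is purely illustrative and not Ceva's NPN32/NPN64 API; the function name and structure are hypothetical.

```python
# Illustrative sketch only: the basic int8 multiply-accumulate (MAC)
# operation underlying cores like Ceva's NPN32/NPN64. Hypothetical code,
# not any vendor API.

def int8_mac_dot(weights, activations):
    """Dot product of two int8 vectors using a wide (int32) accumulator,
    the operation a bank of int8 MAC units performs in hardware."""
    assert all(-128 <= w <= 127 for w in weights), "weights must fit in int8"
    assert all(-128 <= a <= 127 for a in activations), "activations must fit in int8"
    acc = 0  # accumulate in a wider type (int32) to avoid int8 overflow
    for w, a in zip(weights, activations):
        acc += w * a
    return acc

# A core with 32 int8 MACs can process 32 such weight/activation pairs per
# cycle; 64 MACs doubles that per-cycle throughput, hence "double the
# performance" for the NPN64.
print(int8_mac_dot([1, -2, 3], [4, 5, -6]))  # 1*4 + (-2)*5 + 3*(-6) = -24
```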