Ceva has announced two neural network processing cores, the NPN32 and NPN64, designed for SoCs running TinyML models. The NPN32 features 32 int8 MACs and is optimized for voice, audio, object-detection, and anomaly-detection workloads. The NPN64 offers 64 int8 MACs, doubling throughput and adding features for more complex AI tasks.
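The headline figure for both cores is their count of int8 MAC (multiply-accumulate) units, the basic primitive behind quantized neural-network inference. As a rough illustration only (this is not Ceva's implementation, and the function name is hypothetical), an int8 MAC multiplies pairs of 8-bit values and sums the products into a wider accumulator so intermediate results do not overflow:

```python
import numpy as np

def int8_mac(weights: np.ndarray, activations: np.ndarray) -> np.int32:
    """Multiply-accumulate over int8 vectors into an int32 accumulator.

    Hypothetical sketch: hardware MAC arrays do this for many lanes in
    parallel per cycle; an "NPN32-style" core would have 32 such lanes.
    """
    acc = np.int32(0)
    for w, a in zip(weights.astype(np.int8), activations.astype(np.int8)):
        # Widen each operand before multiplying so the product cannot
        # overflow the 8-bit range.
        acc += np.int32(w) * np.int32(a)
    return acc

w = np.array([127, -128, 3], dtype=np.int8)
a = np.array([2, 2, 2], dtype=np.int8)
print(int8_mac(w, a))  # 127*2 + (-128)*2 + 3*2 = 4
```

Doubling the MAC count (32 to 64 lanes) is what lets the NPN64 deliver roughly twice the dot-product throughput per cycle on quantized models.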