<p>➀ Fujitsu developed a generative AI reconstruction technology using 1-bit quantization and AI distillation, reducing memory usage by 94% and achieving 3x faster inference while retaining 89% accuracy.</p><p>➁ The method enables large AI models like Takane LLM to run on low-end GPUs and edge devices (e.g., smartphones, industrial machines), improving data security and energy efficiency compared to conventional approaches like GPTQ.</p><p>➂ The brain-inspired technology allows task-specific specialization, with trials planned for late 2025 and quantized models already released via Hugging Face. Applications show 11x speed gains in sales predictions and 10% accuracy improvements in image recognition.</p>
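<p>To illustrate the general idea behind 1-bit weight quantization (Fujitsu's actual reconstruction method is proprietary and not described in detail here), the following sketch binarizes a weight matrix to signs plus a per-row scale, in the style of XNOR-Net-like schemes. The function names and the per-row scaling choice are illustrative assumptions, not Fujitsu's implementation.</p>

```python
import numpy as np

def binarize_weights(W):
    """Illustrative 1-bit quantization (not Fujitsu's method): keep only
    the sign of each weight, plus one float scale per output row that
    preserves the average weight magnitude."""
    alpha = np.abs(W).mean(axis=1, keepdims=True)  # per-row scale factor
    B = np.sign(W).astype(np.int8)
    B[B == 0] = 1  # map exact zeros to +1 so every weight is 1 bit
    return B, alpha

def binary_matmul(x, B, alpha):
    # Reconstruct W ~ alpha * B on the fly; a real kernel would use
    # XNOR/popcount arithmetic instead of a float matmul.
    return x @ (alpha * B).T

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)).astype(np.float32)  # toy dense layer
x = rng.normal(size=(2, 8)).astype(np.float32)  # toy activations

B, alpha = binarize_weights(W)
dense = x @ W.T                    # full-precision output
approx = binary_matmul(x, B, alpha)  # 1-bit approximation
# Storage drops from 32 bits per weight to 1 bit plus one scale per row,
# which is the source of the large memory reductions reported.
```

<p>The approximation is lossy, which is why such pipelines typically pair quantization with distillation or reconstruction steps to recover accuracy.</p>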