➀ Fujitsu developed a generative AI reconstruction technology using 1-bit quantization and AI distillation, reducing memory usage by 94% and achieving 3x faster inference while retaining 89% accuracy.

➁ The method enables large AI models like the Takane LLM to run on low-end GPUs and edge devices (e.g., smartphones, industrial machines), improving data security and energy efficiency compared to conventional approaches like GPTQ.

➂ The brain-inspired technology allows task-specific specialization, with trials planned for late 2025 and quantized models already released via Hugging Face. Applications show 11x speed gains in sales predictions and 10% accuracy improvements in image recognition.
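The article does not describe Fujitsu's quantization algorithm itself. As a rough illustration of the general idea behind 1-bit weight quantization, the sketch below binarizes a weight matrix to {-1, +1} with a per-row scale factor, in the spirit of classic binarization schemes. This is a minimal, hypothetical example, not Fujitsu's method; all names are illustrative.

```python
# Minimal sketch of 1-bit weight quantization (binarization with a per-row scale).
# Illustrative only; not Fujitsu's actual algorithm.
import numpy as np

def quantize_1bit(weights: np.ndarray):
    """Binarize a 2-D weight matrix to {-1, +1} with a per-output-row scale.

    The scale alpha = mean(|w|) per row minimizes the L2 error between the
    full-precision row and its binary approximation alpha * sign(w).
    """
    signs = np.where(weights >= 0, 1.0, -1.0)            # 1-bit codes
    alpha = np.abs(weights).mean(axis=1, keepdims=True)  # per-row scale factor
    return signs.astype(np.int8), alpha

def dequantize_1bit(signs: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Reconstruct an approximate full-precision matrix from the 1-bit codes."""
    return signs.astype(np.float32) * alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8)).astype(np.float32)

    signs, alpha = quantize_1bit(w)
    w_hat = dequantize_1bit(signs, alpha)

    # Storage drops from 32 bits per weight to 1 bit per weight (plus one float
    # scale per row), which is where the large memory savings come from.
    rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
    print(f"relative reconstruction error: {rel_err:.3f}")
```

In practice, production schemes pair such extreme quantization with techniques like distillation to recover accuracy, which is consistent with the distillation step mentioned above.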