<p>➀ Fujitsu has developed a generative AI reconstruction technology that combines 1-bit quantization with AI distillation, cutting memory usage by 94% and delivering 3x faster inference while retaining 89% of the original model's accuracy.</p><p>➁ The method enables large AI models such as the Takane LLM to run on low-end GPUs and edge devices (e.g., smartphones, industrial machinery), improving data security and energy efficiency over conventional quantization approaches such as GPTQ.</p><p>➂ The brain-inspired technology supports task-specific specialization; trials are planned for late 2025, and quantized models have already been released via Hugging Face. In applications, it has shown 11x speed gains in sales prediction and 10% accuracy improvements in image recognition.</p>
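<p>Fujitsu has not published the exact algorithm, but the 94% memory figure is consistent with generic sign-based 1-bit quantization of FP16 weights (1 − 1/16 ≈ 94%). The sketch below illustrates that idea under those assumptions; the function names and per-row scaling scheme are hypothetical, not Fujitsu's implementation.</p>

```python
import numpy as np

def one_bit_quantize(w):
    """Hypothetical sketch: quantize each weight to 1 bit (its sign)
    plus one floating-point scale per row, a common scheme in
    extreme-quantization research (not Fujitsu's published method)."""
    scale = np.abs(w).mean(axis=1, keepdims=True)  # per-row scale factor
    signs = np.sign(w)
    signs[signs == 0] = 1  # map zeros to +1 so every weight is ±1
    return signs.astype(np.int8), scale

def dequantize(signs, scale):
    """Reconstruct an approximation of the original weights."""
    return signs * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = one_bit_quantize(w)
w_hat = dequantize(q, s)

# Memory arithmetic behind the headline number: FP16 stores 16 bits
# per weight, 1-bit quantization stores 1 bit per weight.
reduction = 1 - 1 / 16
print(f"{reduction:.1%}")  # 93.8%, matching the ~94% cited above
```

In practice such schemes pack the ±1 signs into bitmaps and keep only the small per-row scales in floating point, which is where the memory saving comes from.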