➀ Researchers at Sejong University developed STAU, a hardware accelerator that lets edge devices run large AI models such as BERT and GPT, achieving a 5.18× speedup over CPU execution while maintaining 97% accuracy.

➁ The design pairs a Variable Systolic Array (VSA) with a Radix-2 softmax optimization (sketched below) to reduce computational complexity and power consumption, cutting processing time by 68% for long inputs.

➂ Implemented on an FPGA with a custom 16-bit floating-point format, STAU supports multiple transformer models through software updates alone, advancing on-device AI deployment without cloud dependency.
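As a rough illustration of the Radix-2 softmax idea, the sketch below replaces the usual e^x in softmax with 2^x, which hardware can evaluate with a bit shift for the integer part and a small lookup for the fraction. This is a minimal NumPy sketch of the general base-2 technique, not the STAU implementation; the function name `radix2_softmax` and all numbers in the example are illustrative assumptions.

```python
import numpy as np

def radix2_softmax(x):
    """Base-2 softmax sketch: uses 2**x instead of e**x so that the
    exponential can be split into an integer part (a bit shift in
    hardware) and a fractional part (small table or polynomial).
    Illustrative only; not the exact STAU datapath."""
    x = np.asarray(x, dtype=np.float32)
    # Subtract the max for numerical stability (standard softmax trick).
    z = x - np.max(x)
    # np.exp2 computes 2**z; a hardware unit would decompose z into
    # integer and fractional components instead.
    powers = np.exp2(z)
    return powers / np.sum(powers)

# Example: attention scores for one query row (made-up values).
scores = [2.0, 1.0, 0.5, -1.0]
print(radix2_softmax(scores))
```

Because 2^x = e^(x·ln 2), a base-2 softmax is equivalent to a temperature-scaled standard softmax, so accuracy can be preserved while the expensive exponential unit is simplified.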