<p>➀ The increasing complexity of AI models and the exponential growth in the number and variety of networks have put chip manufacturers in a dilemma between fixed-function acceleration and programmable accelerators;</p><p>➁ General-purpose AI processing falls short of requirements, while focusing on specific use cases or workloads delivers greater power savings and better performance in a smaller footprint;</p><p>➂ AI algorithm complexity continues to rise and the number of floating-point operations keeps growing, with the trend line pointing only up and to the right.</p>