Recent #LLMs news in the semiconductor industry

11 months ago
1. Denodo, a data management provider, has appointed Christophe Culine as its first chief revenue officer (CRO) to accelerate global growth.
2. Culine brings over 25 years of experience leading technology companies and has previously held roles at Dragos, RiskIQ, Qualys, Fortinet, and Venafi.
3. Denodo's data virtualization capabilities have significantly reduced IT costs and accelerated data access for customers such as the FAA.
LLMs, RAG
2 months ago

➀ Large Language Models (LLMs) have revolutionized the field of natural language processing (NLP), driving a new wave of technological progress. These models, however, are traditionally deployed on cloud servers, which introduces challenges such as network latency, data security risks, and the need for continuous internet connectivity, limiting their widespread application and user experience.

➁ Compute-in-storage integrates storage and computation, adding compute capability to the memory itself so that two-dimensional and three-dimensional matrix calculations can be performed where the data resides. This can effectively overcome the bottleneck of the von Neumann architecture and achieve a significant increase in computing energy efficiency.

➂ There are three types of compute-in-storage: Processing Near Memory (PNM), Processing In Memory (PIM), and Computing In Memory (CIM). Each type has its own advantages and is suitable for different application scenarios.

DRAM technology, LLMs
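The von Neumann bottleneck described above can be sketched with a toy data-movement count for a matrix-vector product, the core LLM inference kernel. This is an illustrative model, not a measurement: the matrix size, weight precision, and the assumption that a compute-in-memory array keeps weights fully resident are all hypothetical.

```python
# Toy comparison of bytes crossing the memory interface for one
# matrix-vector product: a von Neumann design fetches every weight,
# while a compute-in-memory (CIM) design computes on weights in place.
# All constants below are illustrative assumptions.

ROWS, COLS = 4096, 4096      # hypothetical weight matrix for one layer
BYTES_PER_WEIGHT = 2         # assume fp16 weights and activations

def bytes_moved_von_neumann() -> int:
    # Every weight travels from memory to the compute unit each pass,
    # plus the input and output activation vectors.
    weights = ROWS * COLS * BYTES_PER_WEIGHT
    activations = (ROWS + COLS) * BYTES_PER_WEIGHT
    return weights + activations

def bytes_moved_cim() -> int:
    # Weights stay resident in the memory array; only the activation
    # vectors cross the interface.
    return (ROWS + COLS) * BYTES_PER_WEIGHT

vn, cim = bytes_moved_von_neumann(), bytes_moved_cim()
print(f"von Neumann: {vn:,} B, CIM: {cim:,} B, ratio ~{vn // cim}x")
```

Under these toy assumptions, weight traffic dominates by three orders of magnitude, which is the movement a CIM array eliminates.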
11 months ago
1. Researchers from UC Santa Cruz have discovered a method to run large language models (LLMs) at a mere 13 watts without compromising performance.
2. The key to this efficiency is the elimination of matrix multiplication in LLM processing, which, when optimized, significantly boosts performance-per-watt.
3. The broader applicability of this approach to AI in general is yet to be determined.
AI research, LLMs, energy efficiency
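One way matrix multiplication can be eliminated is by constraining weights to the ternary set {-1, 0, +1}, so a dense layer reduces to additions and subtractions. The sketch below is a minimal reconstruction of that idea, not the UC Santa Cruz team's actual implementation; the function name and shapes are assumptions for illustration.

```python
# Minimal sketch of a "matmul-free" dense layer: with ternary weights
# in {-1, 0, +1}, the product W @ x needs no multiplications, only
# adds, subtracts, and skips. Illustrative only.

def ternary_linear(x: list[float], w: list[list[int]]) -> list[float]:
    """Compute y = W @ x where every entry of w is -1, 0, or +1."""
    out = []
    for row in w:
        acc = 0.0
        for xj, wij in zip(x, row):
            if wij == 1:
                acc += xj      # +1 weight: add instead of multiply
            elif wij == -1:
                acc -= xj      # -1 weight: subtract instead of multiply
            # a 0 weight contributes nothing and is skipped entirely
        out.append(acc)
    return out

# Example: a 2x3 ternary weight matrix applied to a 3-vector.
y = ternary_linear([1.0, 2.0, 3.0], [[1, 0, -1], [-1, 1, 1]])
print(y)  # [-2.0, 4.0]
```

Because adders are far cheaper than multipliers in both silicon area and energy, this substitution is one plausible route to the large performance-per-watt gains the summary describes.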