Recent #AI research news in the semiconductor industry

4 months ago

Researchers at ETH Zurich have developed a method that makes AI answers more reliable over time. Their algorithm is highly selective in choosing enrichment data, and with it, AI models up to 40 times smaller can match the output quality of the best large models.

ChatGPT and similar tools often amaze us with the accuracy of their answers, but they also often give us reason for doubt. A major challenge of powerful AI answer engines is that they serve up flawless answers and obvious nonsense with equal ease. Part of the problem lies in how the underlying large language models (LLMs) deal with uncertainty: until now, it has been very difficult to judge whether an LLM focused on text processing and generation bases its answers on a solid foundation of data or is operating on uncertain ground.

Researchers from the Institute for Machine Learning at the Department of Computer Science at ETH Zurich have now developed a method to specifically reduce this uncertainty. 'Our algorithm can enrich the general language model of the AI with additional data from the thematic area relevant to the question. In combination with the specific question, we can then retrieve from the depths of the model and from the enrichment data precisely those relationships that are likely to generate a correct answer,' explains Jonas Hübotter from the Learning & Adaptive Systems Group, who developed the new method as part of his PhD studies.
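The article does not spell out how the algorithm selects its enrichment data. As a purely illustrative sketch (not the ETH Zurich researchers' actual method), one simple way to pick question-relevant documents is to rank a corpus by similarity to the question; here with toy bag-of-words vectors and cosine similarity, using only the standard library. All function names and the example corpus are hypothetical.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Toy bag-of-words embedding: lowercase word -> count."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_enrichment_data(question, corpus, k=2):
    """Return the k corpus documents most similar to the question."""
    q = vectorize(question)
    ranked = sorted(corpus, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Photosynthesis converts sunlight into chemical energy.",
    "Paris is the capital of France and a major European city.",
    "Neural networks are trained with gradient descent.",
]
print(select_enrichment_data("What is the capital of France?", corpus))
```

In a real system the selected documents would then be fed to the model alongside the question, so that the answer is grounded in topically relevant data rather than in the model's general knowledge alone.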

AI, AI Ethics, AI research, Algorithm, Data Processing, ETH Zurich, machine learning
6 months ago

➀ Huawei's Noah's Ark Lab undergoes a leadership change: Wang Yunhe, a successor from the post-1990s generation, takes over as director from Yao Jun;

➁ Yao Jun, the former director, has an extensive background in AI research, including deep learning and heterogeneous AI computing systems;

➂ Wang Yunhe has a strong academic and professional background in AI, including deep learning, model compression, and computer vision, and has been involved in significant projects at Huawei.

AI research
11 months ago
➀ OpenAI is facing a challenging period with high-level leadership changes and a potential setback in its massive financing round; ➁ Apple's decision not to participate in the latest $6.5 billion round could affect the company's structure and strategy; ➂ Concerns are raised about OpenAI's focus on product development over research, and the impact on its culture and mission.
AI research, Apple, Financing, Leadership, OpenAI
about 1 year ago
1. Researchers from UC Santa Cruz have discovered a method to run large language models (LLMs) at a mere 13 watts without compromising performance. 2. The key to this efficiency is the elimination of matrix multiplication in LLM processing, which, when optimized, significantly boosts performance-per-watt. 3. The broader applicability of this approach to AI in general is yet to be determined.
AI research, LLMs, energy efficiency
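The summary above does not explain how matrix multiplication can be eliminated. One published approach in this line of work (e.g. ternary-weight networks) restricts weights to {-1, 0, +1}, so a matrix-vector product reduces to additions and subtractions, which are far cheaper in hardware than floating-point multiplies. The sketch below is a toy illustration of that idea under those assumptions, not the UC Santa Cruz team's implementation.

```python
def ternary_matvec(W, x):
    """Compute W @ x where every entry of W is -1, 0, or +1,
    using only additions and subtractions (no multiplies)."""
    out = []
    for row in W:
        acc = 0.0
        for w, v in zip(row, x):
            if w == 1:
                acc += v       # +1 weight: add the input
            elif w == -1:
                acc -= v       # -1 weight: subtract the input
            # 0 weight: contributes nothing, skip
        out.append(acc)
    return out

W = [[1, 0, -1],
     [-1, 1, 0]]
x = [2.0, 3.0, 5.0]
print(ternary_matvec(W, x))  # [-3.0, 1.0]
```

Because every "multiply" collapses to an add, a subtract, or a skip, such layers map well onto low-power hardware, which is consistent with the reported 13-watt figure, though the article does not detail the exact mechanism.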