Recent #AIChip news in the semiconductor industry

4 months ago

➀ An engineer named Hao Tian gave up stock options worth 8 million yuan and quit before his company's IPO.

➁ Senior engineer Jun Yuan is optimistic about the company's future and believes its finances will improve once it goes public.

➂ Sales representative Han Lin faces significant pressure to sell AI chips; the chips must be deployed and operational for the company to generate the revenue needed for IPO approval.

AI chip · IPO
8 months ago
➀ AMD launches its latest AI chip, server CPU, AI network card, DPU, and AI PC mobile processor; ➁ The flagship AI chip, the AMD Instinct MI325X GPU, reaches 21 PFLOPS of peak AI compute with HBM3E high-bandwidth memory; ➂ The new EPYC server CPU is built on TSMC's 3nm/4nm processes, scales up to 192 cores and 384 threads, and is priced at $14,813 per unit; ➃ The AMD EPYC 9575F delivers up to 2.7x the performance of its predecessor in SPEC CPU tests; ➄ AMD's third-generation commercial AI mobile processor, the Ryzen AI PRO 300 series, is designed for next-generation enterprise AI PCs.
AI chip · AMD · EPYC · Data center
9 months ago
➀ Broadcom's AI-related revenue is expected to reach $3.25 billion in the third quarter, tripling from the previous year. ➁ The company's total revenue for the second quarter was $12.487 billion, a 43% increase year-over-year. ➂ Analysts predict Broadcom's AI revenue will exceed $11 billion this year, with ASIC and networking products making up 65% and 35% respectively. ➃ Broadcom is seen as the second-largest AI chip supplier globally, trailing only Nvidia, and leads in custom chip market share at the 7nm, 5nm, and 3nm nodes. ➄ Citibank views Broadcom as the next hot AI stock, citing growth in new customers and the acquisition of VMware.
AI chip · Broadcom · Revenue growth
9 months ago
➀ Cerebras Systems introduces the WSE-3 AI chip, designed for training the largest AI models with 5nm technology and 4 trillion transistors. ➁ The chip features 900,000 AI-optimized cores, offering 125 petaFLOPS of peak AI performance. ➂ Cerebras targets the inference market with its new product, claiming to generate 1,800 tokens per second, significantly outperforming Nvidia's H100. ➃ The company utilizes SRAM for high bandwidth, achieving 21 PBps, contrasting with Nvidia's HBM3e at 4.8 TBps. ➄ Cerebras plans to support more models and aims to provide competitive pricing, starting at 10 cents per million tokens.
AI chip · Cerebras · Inference service
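The bandwidth gap claimed in ➃ above can be sanity-checked with simple arithmetic. A minimal sketch, using only the figures quoted in the summary (aggregate WSE-3 on-chip SRAM bandwidth versus a single GPU's HBM3e stack, so the comparison favors the wafer-scale part by construction):

```python
# Figures as quoted in the summary above.
wse3_sram_bw_tbps = 21_000  # Cerebras WSE-3 SRAM: 21 PB/s, expressed in TB/s
hbm3e_bw_tbps = 4.8         # Nvidia HBM3e (per GPU): 4.8 TB/s

ratio = wse3_sram_bw_tbps / hbm3e_bw_tbps
print(f"WSE-3 SRAM bandwidth is ~{ratio:.0f}x a single HBM3e stack")  # ~4375x
```

Memory bandwidth is the usual bottleneck for LLM inference (each generated token streams the model weights), which is why such a ratio can translate into the large token-throughput advantage claimed in ➂.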
9 months ago
➀ IBM introduces the Telum II processor with AI acceleration capabilities, designed for large-scale machine learning and large language models. ➁ FuriosaAI unveils the RNGD, a Tensor Contraction Processor (TCP) optimized for high-performance, efficient large language model inference in data centers. ➂ RNGD features a 5nm process, 40 billion transistors, and supports up to 512TFLOPS (FP8) and 1024TOPS (INT4) with a TDP of only 150W, significantly lower than typical gaming GPUs.
AI chip · Data center · Energy efficiency
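The RNGD efficiency claim in ➂ can be made concrete by dividing peak compute by TDP. A rough sketch using only the numbers quoted in the summary (peak ratings, so real workloads will land below these figures):

```python
# Figures as quoted in the summary above for FuriosaAI's RNGD.
fp8_tflops = 512   # peak FP8 compute
int4_tops = 1024   # peak INT4 compute
tdp_watts = 150    # thermal design power

print(f"FP8:  {fp8_tflops / tdp_watts:.1f} TFLOPS/W")  # ~3.4
print(f"INT4: {int4_tops / tdp_watts:.1f} TOPS/W")     # ~6.8
```

For context, a 150 W TDP is roughly half that of a typical high-end gaming GPU, which is the comparison the summary draws for data-center deployment density.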