Recent #HBM4 news in the semiconductor industry

5 months ago

➀ SK hynix has achieved a 70% yield rate on its HBM4 12-Hi memory, which is set to be used in NVIDIA's upcoming Rubin R100 AI GPUs.

➁ The test yield is an early indicator used to estimate the eventual production yield; SK hynix is targeting yields in the high-90% range (a rough illustration of what that ramp implies follows this list).

➂ TSMC, as a key partner, is expected to expand its CoWoS advanced packaging capacity to handle the strong demand expected for the large Rubin chips.
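
For intuition on why that ramp is demanding, here is a minimal sketch of a simplified stack-yield model, assuming each of the 12 stacked DRAM layers must bond successfully and independently. This is an illustrative assumption, not SK hynix's reported methodology, and the per-layer figures are back-calculated from the 70% and high-90% stack-level numbers above.

```python
# Illustrative only: a simplified stacked-die yield model, NOT SK hynix's methodology.
# It assumes each of the 12 stacked DRAM layers must bond successfully and
# independently; the per-layer figures are implied by the article's stack-level
# numbers (70% today, ~high-90% target), not reported directly.

def per_layer_yield(stack_yield: float, layers: int = 12) -> float:
    """Per-layer yield implied by an overall stack yield, assuming independent layers."""
    return stack_yield ** (1 / layers)

current = per_layer_yield(0.70)  # ~0.971 per layer for a 70% 12-Hi stack yield
target = per_layer_yield(0.97)   # ~0.997 per layer for a ~97% stack yield

print(f"Implied per-layer yield today:  {current:.3f}")
print(f"Implied per-layer yield needed: {target:.3f}")
```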

HBM4, NVIDIA, Rubin AI GPUs, SK hynix
about 1 year ago
➀ TSMC and Global Unichip have secured bulk orders for the base dies used in SK hynix's next-generation HBM4 memory.

➁ The collaboration between TSMC and Global Unichip focuses on developing key HBM peripheral components for AI servers.

➂ The industry anticipates significant changes in HBM4, including taller stacks and the integration of logic ICs to raise bandwidth and transfer speeds.
Global Unichip, HBM4, TSMC
9 months ago

➀ The launch of HBM4 memory brings significant changes, the most noticeable being the widening of the memory interface from 1024 bits to 2048 bits;

➁ At its 2024 European Technology Symposium, TSMC revealed details about manufacturing HBM4 base dies using improved versions of its N12 and N5 processes;

➂ TSMC plans to offer two different manufacturing processes, N12FFC+ and N5, for the first batch of HBM4 products;

➃ TSMC is working with major HBM memory suppliers like Micron, Samsung, and SK Hynix to integrate HBM4 memory technology using advanced process nodes;

➄ TSMC's N12FFC+ process is well suited to reaching HBM4 performance targets, allowing memory manufacturers to build 12-Hi (48 GB) and 16-Hi (64 GB) stacks with more than 2 TB/s of per-stack bandwidth (a back-of-the-envelope check follows this list);

➅ TSMC's N5 process integrates more logic functions, reduces power consumption, and delivers higher performance with very fine interconnect pitches, enabling HBM4 to be 3D-stacked directly on logic chips.
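
To make the figures in ➀ and ➄ concrete, the sketch below checks the quoted stack capacities and the >2 TB/s bandwidth. The 2048-bit interface width comes from the points above; the per-pin data rate (8 Gb/s) and per-die density (4 GB) are assumed values chosen to be consistent with the quoted 48 GB / 64 GB and 2 TB/s figures, not numbers disclosed by TSMC.

```python
# Back-of-the-envelope HBM4 stack arithmetic (assumed values marked below).

INTERFACE_BITS = 2048   # from the article: HBM4 widens the interface to 2048 bits
PIN_RATE_GBPS = 8.0     # ASSUMED per-pin data rate (Gb/s), consistent with ">2 TB/s"
DIE_CAPACITY_GB = 4     # ASSUMED 32 Gb (4 GB) per DRAM die

def stack_bandwidth_tbps(interface_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in TB/s: pins * per-pin rate, converted Gb/s -> GB/s -> TB/s."""
    return interface_bits * pin_rate_gbps / 8 / 1000

def stack_capacity_gb(layers: int, die_capacity_gb: int = DIE_CAPACITY_GB) -> int:
    """Total stack capacity: number of stacked DRAM dies times per-die capacity."""
    return layers * die_capacity_gb

print(f"Bandwidth: {stack_bandwidth_tbps(INTERFACE_BITS, PIN_RATE_GBPS):.2f} TB/s")  # ~2.05 TB/s
print(f"12-Hi capacity: {stack_capacity_gb(12)} GB")  # 48 GB
print(f"16-Hi capacity: {stack_capacity_gb(16)} GB")  # 64 GB
```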

5nm Technology, Advanced Packaging, HBM4, Memory Chips, TSMC
10 months ago
➀ HBM4 is key to advancing AI, providing the capacity and performance needed for large-scale, data-intensive applications.

➁ HBM4 improves AI and ML performance through higher bandwidth and memory density, reducing bottlenecks and improving overall system performance.

➂ HBM4 is designed with energy efficiency in mind, delivering better performance per watt, which is crucial for the sustainability of large-scale AI deployments.

➃ HBM4's scalability allows capacity and bandwidth to grow without disproportionate cost or inefficiency, making it important for deploying AI across a wide range of applications.
AI, AI Performance, HBM4, Memory Technology, energy efficiency
11 months ago
➀ HBM4 is set to double the channel width of HBM3E, significantly boosting data transfer speeds and performance (a quick comparison follows below).

➁ SK hynix and Samsung are in a fierce race to be the first to mass-produce HBM4, with both aiming to supply NVIDIA's AI chips.

➂ Integrating memory and logic semiconductors in a single package remains a significant challenge for HBM4 development.
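
As a quick illustration of ➀, doubling the interface width doubles per-stack bandwidth at an equal per-pin data rate. The 1024-bit and 2048-bit widths come from the HBM3E and HBM4 descriptions above; the 9.6 Gb/s per-pin rate is an assumed, illustrative figure, not one from the article.

```python
# Illustrative comparison: doubling the interface width doubles per-stack bandwidth
# at the same per-pin data rate. The 9.6 Gb/s rate is an assumption for illustration.

PIN_RATE_GBPS = 9.6

def bandwidth_tbps(interface_bits: int, pin_rate_gbps: float = PIN_RATE_GBPS) -> float:
    """Peak per-stack bandwidth in TB/s, converted Gb/s -> GB/s -> TB/s."""
    return interface_bits * pin_rate_gbps / 8 / 1000

print(f"HBM3E (1024-bit): {bandwidth_tbps(1024):.2f} TB/s per stack")  # ~1.23 TB/s
print(f"HBM4  (2048-bit): {bandwidth_tbps(2048):.2f} TB/s per stack")  # ~2.46 TB/s
```
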
HBM4, High-Performance Memory, semiconductor industry