➀ NVIDIA confirms that the Blackwell Ultra and Vera Rubin AI architectures are progressing smoothly, with major announcements expected at GTC 2025;
➁ Despite initial delays, production has accelerated and the annual release cadence remains on track;
➂ The next-generation Rubin AI GPUs will adopt advanced HBM4 memory, with partners including SK hynix, Samsung, and Micron.
➀ SK hynix has achieved a 70% yield rate on its HBM4 12-Hi memory, which is set to be used in NVIDIA's upcoming Rubin R100 AI GPUs.
➁ This test yield serves as an early indicator of the eventual production yield, with SK hynix ultimately targeting a yield in the late-90% range.
➂ TSMC, as a key partner, is expected to expand its CoWoS advanced packaging capacity to handle the large Rubin chip demand.
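To see why a 70% test yield on a 12-Hi stack is a strong result, a simple model helps. The sketch below treats each of the 12 stacked layers as an independent step that succeeds with some probability p, so the stack yield is p raised to the number of layers. This independence assumption is a simplification for illustration, not a model from the article.

```python
# Rough yield model for stacked HBM: stack_yield = p ** layers,
# assuming each layer bond succeeds independently with probability p.
# This is an illustrative simplification, not SK hynix's actual model.

def per_layer_success(stack_yield: float, layers: int) -> float:
    """Per-layer success rate implied by an overall stack yield."""
    return stack_yield ** (1 / layers)

# The reported 70% test yield on 12-Hi implies roughly 97% per-layer success.
print(round(per_layer_success(0.70, 12), 4))  # ~0.9707
# Reaching a late-90% stack yield (say 95%) would need ~99.6% per layer.
print(round(per_layer_success(0.95, 12), 4))  # ~0.9957
```

Under this model, pushing the stack yield from 70% into the late-90% range requires near-perfect per-layer bonding, which is why HBM stacking yields climb slowly.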
➀ Samsung and SK hynix are competing to supply HBM4 to Broadcom, a rising challenger to NVIDIA.
➁ SK Hynix has received a request from Broadcom for a large supply of custom HBM4.
➂ Samsung is also in discussions with Broadcom about supplying HBM4.
➀ The arrival of HBM4 brings significant changes, most notably the widening of the memory interface from 1024 to 2048 bits;
➁ At its 2024 European Technology Symposium, TSMC revealed details of HBM4 base dies built on improved versions of its N12 and N5 processes;
➂ TSMC plans to use two different manufacturing processes, N12FFC+ and N5, for the first batch of HBM4 base dies;
➃ TSMC is working with major HBM memory suppliers like Micron, Samsung, and SK Hynix to integrate HBM4 memory technology using advanced process nodes;
➄ TSMC's N12FFC+ process is suitable for achieving HBM4 performance, allowing memory manufacturers to build 12-Hi (48GB) and 16-Hi (64GB) stacks with over 2TB/s bandwidth;
➅ TSMC's N5 process will integrate more logic functions, reduce power consumption, and deliver higher performance with a very fine interconnect pitch, enabling HBM4 to be 3D-stacked directly on logic chips.
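The headline figures above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a per-pin data rate of 8 Gb/s (a commonly cited early HBM4 figure) and 32 Gb (4 GB) DRAM dies per layer; both values are assumptions for illustration, not numbers from the article.

```python
# Back-of-the-envelope HBM4 numbers (assumed parameters, not official specs).

INTERFACE_BITS = 2048   # HBM4 widens the interface from 1024 to 2048 bits
PIN_RATE_GBPS = 8       # assumed per-pin data rate in Gb/s
DIE_CAPACITY_GB = 4     # assumed 32 Gb (4 GB) DRAM die per stacked layer

def stack_bandwidth_tbps(interface_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in TB/s (bits -> bytes, GB -> TB)."""
    return interface_bits * pin_rate_gbps / 8 / 1000

def stack_capacity_gb(layers: int, die_capacity_gb: int = DIE_CAPACITY_GB) -> int:
    """Capacity of an n-Hi stack in GB."""
    return layers * die_capacity_gb

print(stack_bandwidth_tbps(INTERFACE_BITS, PIN_RATE_GBPS))  # 2.048 (TB/s)
print(stack_capacity_gb(12))  # 48 (GB, 12-Hi)
print(stack_capacity_gb(16))  # 64 (GB, 16-Hi)
```

With these assumed parameters the arithmetic lands exactly on the article's figures: just over 2 TB/s per stack, 48 GB for 12-Hi, and 64 GB for 16-Hi.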