Samsung Electronics’ latest high bandwidth memory (HBM) chips have yet to pass Nvidia’s tests for use in the U.S. company’s AI processors, according to three people briefed on the matter. The problems, which involve heat and power consumption, affect Samsung’s HBM3 chips, the fourth-generation HBM standard widely used in graphics processing units (GPUs) for artificial intelligence, as well as its upcoming fifth-generation HBM3E chips.
The specific reasons Samsung’s chips have failed Nvidia’s tests are being reported for the first time. In response, Samsung said that HBM is a customized memory product that requires close alignment with customers’ needs for optimization, and that it is working with customers to optimize its products. It declined to comment on individual clients.
After this report was first published, Samsung said in separate statements that its chips had not failed over heat and power consumption issues and that testing was proceeding as planned. Nvidia declined to comment.
HBM, a dynamic random access memory (DRAM) standard first introduced in 2013, stacks chips vertically to save space and reduce power consumption, making it crucial for processing the large volumes of data generated by complex AI applications. With demand for advanced GPUs surging amid the generative AI boom, satisfying Nvidia, which holds roughly 80% of the global GPU market for AI applications, is seen as pivotal to the future growth and reputation of HBM manufacturers.