Samsung will use chip manufacturing technology favored by SK Hynix
Competition in HBM is growing increasingly fierce, forcing Samsung to yield to its rival's approach.
Five sources have stated that Samsung Electronics plans to adopt a chip manufacturing technology advocated by its competitor SK Hynix, as the world's top manufacturer of memory chips seeks to catch up in the race to produce high-end chips used to power artificial intelligence (AI).
With the growing popularity of generative AI, demand for high bandwidth memory (HBM) chips has surged. However, unlike its peers SK Hynix and Micron Technology, Samsung has been conspicuously absent from deals to supply the latest HBM chips to AI chip leader Nvidia.
Analysts and industry observers say one reason for Samsung's lag is its decision to stick with a chip manufacturing technology known as non-conductive film (NCF), which leads to some production issues, while SK Hynix switched to a method called mass reflow molded underfill (MR-MUF) to address the weaknesses of NCF.
Recently, however, Samsung has issued orders for chip manufacturing equipment aimed at handling MUF technology, according to three sources directly familiar with the matter.
"Samsung had to take some measures to increase its HBM output... Adopting MUF technology is somewhat of a reluctant move for Samsung, as it ultimately followed the technology first used by SK Hynix," said one of the sources.
According to several analysts, Samsung's HBM3 chip yield is about 10% to 20%, while SK Hynix's HBM3 yield is about 60% to 70%. HBM3 and HBM3E are the latest generations of HBM chips and are in high demand; they are bundled with core microprocessor chips to help handle the vast amounts of data behind generative AI.
A source said that Samsung is also in talks with material manufacturers, including Japan's Nagase, to procure MUF materials. The large-scale production of high-end chips using more MUF is not expected to be ready until next year, as Samsung requires additional testing.
Three sources also indicated that Samsung plans to use NCF and MUF technologies in its latest HBM chips.
Samsung stated that its internally developed NCF technology is the "best solution" for HBM products and will be utilized in its new HBM3E chips. "We are proceeding with the HBM3E product business as planned," said Samsung.
NVIDIA and Nagase declined to comment.
Samsung's plan to use MUF highlights the increasing pressure it faces in the AI chip race. According to research firm TrendForce, the HBM chip market has more than doubled this year due to AI-related demand, reaching nearly $9 billion.
NCF and MUF Technologies
Non-conductive film (NCF) chip manufacturing technology has been widely used by chipmakers to stack multiple layers of chips into compact high-bandwidth memory packages, as thermal-compression films help minimize the space between stacked chips.
However, as more layers are added, the manufacturing process becomes more complex, often with issues related to the adhesive materials. Samsung stated that its latest HBM3E chip has 12 chip layers. Chipmakers have been seeking alternatives to address these weaknesses. SK Hynix took the lead in successfully adopting mass reflow molded underfill (MR-MUF) technology, becoming the first supplier to provide HBM3 chips to NVIDIA.
KB Securities analyst Jeff Kim said that SK Hynix's market share in NVIDIA's HBM3 and more advanced HBM products is estimated to exceed 80% this year.
Micron has joined the high-bandwidth memory chip race, announcing that its latest HBM3E chips will be adopted by NVIDIA to power the latter's H200 Tensor Core GPU, which will begin shipping in the second quarter.
Samsung's HBM3 series has not yet qualified for NVIDIA's supply deals, according to one of the sources and another person familiar with the matter. Samsung's setbacks in the AI chip race have also caught the attention of investors: its stock price has fallen 7% this year, lagging behind SK Hynix and Micron, which have risen 17% and 14% respectively.
SK Hynix continues to push forward
Hoyoung Son, the vice president who heads package development at SK Hynix, said: "Developing customer-specific AI memory requires a new approach, as the flexibility and scalability of the technology become crucial."
In terms of performance, HBM memory with a 1024-bit interface has developed rapidly: from a data transfer rate of 1 GT/s in 2014-2015 to 9.2-10 GT/s in the recently launched HBM3E devices. With HBM4, the memory is set to transition to a 2048-bit interface, which will ensure a steady increase in bandwidth over HBM3E.
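The bandwidth implications of these figures follow from simple arithmetic: peak per-stack bandwidth is the interface width in bytes multiplied by the transfer rate. A minimal sketch, using the rates quoted above (the function name is illustrative, not from any vendor API):

```python
def hbm_bandwidth_gbs(interface_bits: int, rate_gt_s: float) -> float:
    """Peak per-stack bandwidth in GB/s: (bus width / 8 bits per byte) * transfer rate."""
    return interface_bits / 8 * rate_gt_s

# First-generation HBM: 1024-bit interface at 1 GT/s
print(hbm_bandwidth_gbs(1024, 1.0))   # 128.0 GB/s
# HBM3E at 9.2 GT/s over the same 1024-bit interface
print(hbm_bandwidth_gbs(1024, 9.2))   # 1177.6 GB/s
# HBM4: doubling the interface to 2048 bits doubles bandwidth at the same rate
print(hbm_bandwidth_gbs(2048, 9.2))   # 2355.2 GB/s
```

This is why the move to a 2048-bit interface matters: HBM4 can raise bandwidth substantially even without pushing per-pin transfer rates higher.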
However, according to Hoyoung Son, some customers may benefit from differentiation (or semi-custom) solutions based on HBM.
"To achieve diverse AI, the characteristics of AI memory also need to become more diverse," Hoyoung Son said in an interview with BusinessKorea. "Our goal is to have a variety of advanced packaging technologies that can cope with these changes. We plan to provide differentiated solutions that can meet any customer's needs."

With 2048-bit interfaces, many HBM4 solutions may be custom, or at least semi-custom, judging from official and unofficial information about the upcoming standard. Some customers may wish to continue using interposers (which will be very expensive this time), while others may prefer direct bonding, attaching HBM4 modules directly to logic chips, which is also costly.
Manufacturing differentiated HBM products requires sophisticated packaging technologies, including (but not limited to) SK Hynix's MR-MUF. Given the company's extensive experience in HBM, it is likely to offer other options as well, especially for differentiated products.
Hoyoung Son said, "Our goal is to have a range of advanced packaging technologies to address the changing technological landscape. Looking ahead, we plan to offer differentiated solutions to meet the needs of all our customers."