https://www.geeknetic.es/Noticia/37517/Samsung-pisa-el-acelerador-con-HBM4-mas-DRAM-al-mes-y-una-senal-clara-para-la-industria-de-la-IA.html

In the AI debate there is a lot of talk about GPUs, but memory is the other half of the muscle. If the accelerator computes quickly and the memory cannot feed it data at the same rate, the system waits. And waiting, in a data center, is money evaporating. That is why Samsung's move around HBM4 matters more than it seems: the company plans to increase DRAM production for this new generation of high-bandwidth memory.

The leap in scale, as reported, is substantial. Last year, a "1c" DRAM line capable of roughly 60,000 to 70,000 wafers per month was built. If the plan is confirmed, monthly production would approach 200,000 wafers. That figure is not announced for show: it is a bet on volume and sustained demand.

Why HBM4 has become a strategic issue

HBM (High Bandwidth Memory) is not ordinary DRAM. It is stacked and integrated very closely with the graphics processor or accelerator, aiming to deliver enormous bandwidth and better efficiency than more traditional solutions. In AI workloads, that translates into something immediate: fewer bottlenecks and more usable performance.
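The "memory cannot feed the accelerator" problem can be made concrete with a back-of-envelope roofline check: compare a workload's arithmetic intensity (FLOPs per byte moved) against the chip's balance point (peak compute divided by memory bandwidth). A minimal sketch, using hypothetical accelerator numbers rather than any real product's specs:

```python
# Roofline-style check: is a kernel limited by compute or by memory
# bandwidth? All figures below are illustrative assumptions.

def bound(flops_per_byte: float, peak_tflops: float, bw_tb_s: float) -> str:
    """Classify a kernel by comparing its arithmetic intensity against
    the accelerator's balance point (peak FLOPs / bytes per second)."""
    balance = (peak_tflops * 1e12) / (bw_tb_s * 1e12)  # FLOPs per byte
    return "compute-bound" if flops_per_byte >= balance else "memory-bound"

# Hypothetical chip: 1000 TFLOPS peak and 3 TB/s of HBM bandwidth,
# giving a balance point of ~333 FLOPs per byte.
print(bound(2, 1000, 3))    # low-intensity inference step
print(bound(500, 1000, 3))  # dense matmul-heavy training step
```

Kernels below the balance point spend their time waiting on memory no matter how fast the compute units are, which is exactly why faster HBM generations raise useful (not just theoretical) performance.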

HBM4 arrives at a time when the market is no longer discussing whether AI will be used, but how much and where. Model training, real-time inference, video analysis, internal company assistants… all of this multiplies the number of chips deployed and, with it, the demand for memory to match.

A key clue: 4 nm at full capacity for the “base die”

A detail that stands out is that the 4 nm line at the Pyeongtaek S5 fab is reportedly operating at full capacity and is cited as the one producing the "base die" for HBM4. This kind of information usually surfaces when a phase stops being an experiment and becomes industrial preparation.

In HBM, the base die and the packaging are not a formality. Stacked memory requires a very fine process for the assembly to perform, dissipate heat, and meet the stability requirements that large buyers demand. If that part is already "full," the reading is that Samsung is aligning the entire chain, not just one loose piece.

From 70,000 to 200,000 wafers: what the change in scale implies

Roughly tripling potential capacity is not a matter of tightening a screw. It means deciding that you want to be well positioned when the market moves from pilots to massive deployments. In semiconductors, increasing wafers per month means investing, securing materials, and assuming that you will have the orders to fill that capacity.
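The "tripling" claim follows directly from the figures quoted earlier; a quick sanity check of the arithmetic (using the upper end of the reported current range):

```python
# Scale-up implied by the reported figures (approximate, from the article).
current = 70_000   # wafers/month on the existing "1c" DRAM line (upper bound)
planned = 200_000  # reported target for monthly production

print(f"{planned / current:.1f}x")  # close to a 3x jump
```

Against the 60,000-wafer lower bound the multiplier would be larger still, so "tripling" is if anything the conservative reading.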

Where does this demand come from? From the AI race, yes, but also from the need to improve performance per server. When a company buys accelerators, it wants to get the most out of every euro: more models running in parallel, more inferences per second, less power per task. New-generation HBM helps achieve that goal, which is why memory contracts are negotiated almost as if they were energy supplies.

The “this month” and the client list: the data that weighs the most

The information also indicates that Samsung would be preparing to supply HBM4 to major customers such as NVIDIA and AMD "this month." That "this month" does not necessarily mean huge volumes from minute one, but it does suggest that the validation and delivery phase for initial batches is mature.

In this market, entering the supply chain of the large accelerator makers works as an accreditation. You are not just selling memory: you are earning trust and opening the door to more projects. From there, the next step is volume contracts, where the winner is not whoever has the best PowerPoint, but whoever delivers consistently.

Why this affects you even if you don’t have a data center

HBM4 does not stay inside the data center. If high-bandwidth memory becomes scarce or expensive, the cost of training and serving models goes up, and that trickles down to AI services: pricier subscriptions, stricter usage limits, or slower rollouts. If, on the contrary, supply grows and competition tightens, AI becomes cheaper and reaches more everyday uses.

In short: Samsung’s move is a clear sign that HBM4 is no longer a distant promise. It is a centerpiece in the next wave of AI infrastructure, and the industry is preparing to consume it at scale.

The next thing to watch is how yields per wafer evolve and whether packaging keeps pace without creating bottlenecks. In HBM, manufacturing matters, but manufacturing well and on time is what determines who gets the big orders.