https://www.geeknetic.es/Noticia/37441/Qualcomm-y-AMD-ponen-el-ojo-en-SOCAMM2-la-memoria-modular-de-bajo-consumo-que-trabajo-redefinir-la-IA-en-servidores.html
In data centers, chips are talked about as if they were trading cards, but many decisions hinge on something less flashy: memory. Not just how much there is, but how it is integrated, how much heat it generates, and what room it leaves to scale a platform without redesigning it from scratch. With generative AI pushing ahead, the race is no longer just about compute. It is about feeding the accelerators with data without letting power consumption and cooling skyrocket.
That’s where SOCAMM2 comes in: a memory format based on LPDDR5 and LPDDR5X designed for AI systems in data centers. The news is that interest appears to be expanding: Qualcomm and AMD are reportedly exploring its adoption for AI products.
What is SOCAMM2 and why does it matter?
SOCAMM2 stands for Small Outline Compression Attached Memory Module. In practice, it is a compact module that attaches with a compression connector and allows LPDDR memory to be mounted in a replaceable format, without permanent soldering. The promise is simple: take advantage of the efficiency of LPDDR, but with the modularity and serviceability expected in a server environment.
The underlying reason is energy. LPDDR was born for mobile phones and laptops, where every watt counts. In AI, that logic turns to gold: when a cluster runs for hours at full load, any efficiency improvement is multiplied in electrical cost and in density per rack.
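As a back-of-the-envelope illustration of how per-module savings compound at rack scale, here is a minimal sketch. All figures below are hypothetical assumptions for illustration; none of them come from the article:

```python
# All numbers are illustrative assumptions, not data from the article.
MODULES_PER_NODE = 8        # SOCAMM2 modules per server (assumption)
NODES_PER_RACK = 16         # servers per rack (assumption)
WATTS_SAVED_PER_MODULE = 3  # LPDDR vs. a conventional DIMM (assumption)
HOURS_PER_YEAR = 24 * 365   # cluster at sustained full load
EUR_PER_KWH = 0.15          # illustrative electricity price

# A few watts per module multiply across modules, nodes, and hours.
watts_saved_per_rack = MODULES_PER_NODE * NODES_PER_RACK * WATTS_SAVED_PER_MODULE
kwh_saved_per_year = watts_saved_per_rack * HOURS_PER_YEAR / 1000
savings_eur = kwh_saved_per_year * EUR_PER_KWH

print(f"{watts_saved_per_rack} W saved per rack")
print(f"{kwh_saved_per_year:.0f} kWh/year, about {savings_eur:.0f} EUR/year per rack")
```

With these invented inputs, 3 W per module becomes 384 W per rack and several thousand kWh per year, which is the multiplication the paragraph describes, before even counting the cooling load that those watts would have added.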
JEDEC makes it a safe bet
For such a format not to remain a rarity, it needs a common framework. JEDEC announced the final phase of JESD328, the SOCAMM2 standard for LPDDR5 and LPDDR5X modules aimed at AI applications in data centers. With standardization, manufacturers can produce with more confidence and designers can plan platforms several years out.
In addition, SOCAMM2 is associated with the high speeds typical of LPDDR5X, with references to 9,600 MT/s, an important figure when the priority is moving data quickly while keeping latency contained.
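To put that transfer rate in perspective, theoretical peak bandwidth follows directly from the data rate and the module's bus width. A minimal sketch: the article only gives the 9,600 MT/s figure, so the 128-bit module interface used here is an assumption for illustration:

```python
def peak_bandwidth_gbs(mt_per_s: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s:
    (million transfers/s) * (bits per transfer) / (8 bits per byte)."""
    return mt_per_s * 1e6 * bus_width_bits / 8 / 1e9

# 9,600 MT/s is from the article; the 128-bit width is an assumption.
print(peak_bandwidth_gbs(9600, 128))  # 153.6 GB/s per module, theoretical peak
```

Real, sustained bandwidth is always lower than this peak, but the formula shows why the MT/s figure matters: every step up in data rate scales the whole pipe feeding the accelerators.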
Why Qualcomm and AMD are looking at SOCAMM2
That Qualcomm is looking at SOCAMM2 fits with its focus on AI infrastructure, where performance per watt is part of the product, not an extra. In inference systems at scale, memory can decide whether a design is economically viable: lower power allows more capacity within the same thermal budget.
AMD's interest also makes sense because of a key idea: in AI, raw compute is not enough; bandwidth, capacity, and cost must be balanced. SOCAMM2 will not always replace very-high-bandwidth memories, but it can be attractive in configurations where efficiency and capacity scaling matter more than absolute peak bandwidth. If the format is standardized and there are several suppliers, the barrier to entry drops.
Micron and Samsung are already showing muscle
The memory ecosystem is already positioning itself. Micron announced high-capacity SOCAMM2 modules, including 192 GB, highlighting efficiency improvements and direct effects on inference, such as reductions in time to first token in certain scenarios.
Samsung has also introduced SOCAMM2 as an LPDDR module for AI infrastructure, highlighting advantages over DDR5 RDIMMs in bandwidth and power consumption, with figures that point to relevant improvements in energy efficiency.
What changes in the design of servers and accelerators
SOCAMM2 pushes designers to rethink the physical architecture. A compact, low-profile module facilitates density, can simplify on-board routing, and, being replaceable, offers a maintenance cycle closer to what data centers expect. This opens the door to adjusting capacities per customer and allowing future expansions without redoing the entire system.
And there is another practical detail: by using LPDDR, thermal and power management can be gentler under sustained loads, exactly where AI punishes hardest.
Today, the fact that Qualcomm and AMD are exploring SOCAMM2 is not the same as a product announced with a firm date. It remains to be seen which specific families, which module configurations, and which volumes. But the pattern is drawn: a JEDEC standard, high-capacity modules already announced, and an industry obsessed with performance per watt.
It is also a sign of market maturity: when a part is standardized and considered by multiple actors, tools, validations, and supply chains appear around it. For the end customer, that usually translates into less risk and more configuration options. And for operators, it is no small advantage: being able to replace or expand memory without having to discard an entire node because of a decision made at the factory.
If SOCAMM2 takes off, 2026 may be the year when memory stops being an invisible component in AI and becomes a central part of the plot. Because the near future will not be decided only by how many TOPS fit on a card. It will be decided by how much AI you can sustain, for how long, and at what electrical cost.
