https://www.geeknetic.es/Noticia/36266/Microsoft-elige-a-Intel-para-fabricar-su-chip-Maia-2-en-el-nodo-18A-P-con-tecnologia-RibbonFET-y-PowerVia-para-mejorar-la-eficiencia-en-IA.html
Microsoft is redrawing its silicon map. After manufacturing its first accelerator, Maia 100, at TSMC (N5 + CoWoS-S), the company has decided that Maia 2 will be born at Intel Foundry. This is no minor footnote: it is a move with consequences for energy costs, supply capacity, and the geopolitics of computing. If the new 18A node (and its 18A-P derivative) delivers as promised, Microsoft gains room to scale AI without its electricity bill skyrocketing, and Intel finally regains credibility as a third-party foundry for top-tier workloads.
Why Microsoft is making a move
Maia 100 showed that Redmond can design competitive accelerators of its own, but it also made clear the price of playing in the front line: an 820 mm² die, 64 GB of HBM2E at 1.8 TB/s, 500 MB of L1/L2 cache, and a 500 W TDP (peaking at 700 W). Tensor performance kept pace (up to 3 PetaOPS at 6-bit precision, 1.5 PetaOPS at 9-bit, and 0.8 PetaFLOPS at BF16), but so did power draw and the pressure on TSMC's CoWoS packaging chain. Microsoft needs tighter control of cost per watt and certainty of capacity as Azure AI and Copilot grow. Diversifying with Intel is therefore a business move rather than a marketing one.
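A quick back-of-envelope calculation puts those Maia 100 figures in perspective. This is a sketch, not vendor data: the TDP and throughput numbers are the ones quoted above, and the efficiency ratios are simple arithmetic derived from them:

```python
# Rough efficiency figures for Maia 100, derived from the specs quoted
# in the article (illustrative arithmetic, not measured vendor data).

TDP_W = 500          # typical TDP in watts (article cites peaks of 700 W)
BF16_PFLOPS = 0.8    # PetaFLOPS at BF16
INT6_POPS = 3.0      # PetaOPS at 6-bit precision

# Convert Peta-ops/s per watt into Tera-ops/s per watt (x1000).
bf16_tflops_per_w = BF16_PFLOPS * 1000 / TDP_W
int6_tops_per_w = INT6_POPS * 1000 / TDP_W

print(f"BF16:  {bf16_tflops_per_w:.1f} TFLOPS/W at {TDP_W} W")
print(f"6-bit: {int6_tops_per_w:.1f} TOPS/W at {TDP_W} W")
```

Numbers like these, multiplied across tens of thousands of chips, are why cost per watt (not peak TOPS) is the metric driving the foundry decision.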
What 18A/18A-P promises in an AI accelerator
Intel’s proposal combines RibbonFET (its gate-all-around implementation) and PowerVia (backside power delivery). The 18A-P version brings second-generation versions of both, plus redesigned low-threshold devices, materials and geometries that reduce leakage, and refined ribbon widths.
In practical terms: more useful frequency at the same voltage, less heat per operation and more area available for interconnection and memory. In a cluster with thousands of accelerators, a 10-15% improvement in performance per watt frees up megawatts at data center scale and enables more ambitious per-rack densities without throttling.
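To make the megawatt claim concrete, here is a minimal sketch of the arithmetic. The fleet size and per-chip power below are hypothetical assumptions chosen for illustration, not reported Azure figures; the 10-15% range is the one cited above:

```python
# Sketch: power freed by a perf/W gain at fleet scale, for the SAME total
# throughput. Fleet size and per-chip draw are hypothetical assumptions.

ACCELERATORS = 100_000       # assumed fleet size (illustrative)
POWER_PER_CHIP_W = 500       # assumed per-accelerator draw (illustrative)

baseline_mw = ACCELERATORS * POWER_PER_CHIP_W / 1e6  # fleet power in MW

for gain in (0.10, 0.15):
    # A perf/W gain of g lets the same work run at P / (1 + g),
    # so the power freed is P * g / (1 + g).
    saved_mw = baseline_mw * gain / (1 + gain)
    print(f"{gain:.0%} perf/W gain -> ~{saved_mw:.1f} MW freed "
          f"out of {baseline_mw:.0f} MW")
```

Even under these modest assumptions, single-digit megawatts come free per fleet, which is the headroom that enables denser racks without extra cooling.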
The report that puts the piece in its place
In the middle of this story comes the key confirmation. Charlie Demerjian (SemiAccurate) reports that Microsoft has already ordered Maia 2 from Intel Foundry on 18A or 18A-P, and that the project is large enough to act as a "test case" for Intel's new foundry model.
The analyst details that 18A-P integrates threshold, leakage, and metallization changes aimed at squeezing out performance per watt, exactly where an accelerator lives or dies. It is not just a symbolic tape-out: if it goes well, Microsoft would migrate subsequent generations to the same house, with options such as 18A-PT and 14A.
What changes compared to Maia 100
If 18A-P delivers, Maia 2 should sustain its target frequency for longer and reduce voltage droop when the HBM fills up or the interconnect network is under stress. Thermal efficiency opens two doors: scaling the number of nodes per rack without sacrificing sustained clocks, and limiting the need for extreme cooling solutions. It would also be reasonable to expect more memory capacity and improvements to the internal communication mesh, because the latency between compute and HBM weighs as much as raw TOPS.
What Intel gains if it gets it right
For Intel, Maia 2 is a public test. After years of stumbles at 10nm and 7nm, achieving real volume in a leading accelerator would validate its RibbonFET/PowerVia strategy and, above all, its promise of reliable schedules for external customers. The contract with Microsoft works as a seal of trust for other AI designers who today look to TSMC out of inertia.
What Microsoft gains if Intel delivers
Redmond gains silicon sovereignty: less dependence on saturated CoWoS windows, shorter logistics chains, and room to co-design process tweaks for its architecture. Economically, a more efficient accelerator lowers OPEX per deployed model. Strategically, it gives Microsoft leverage against NVIDIA and AMD when negotiating prices and roadmaps. And if the path to 18A-PT/14A materializes, Microsoft will be able to iterate without rewriting its entire software stack, maintaining continuity across its data centers.
What to watch
The first risk is sustained performance: if the node does not deliver the promised temperatures and leakage, the advantage evaporates after ten minutes under load. The second is packaging and interconnect: HBM, substrate, and networking must keep pace so that compute does not drown waiting on memory. The third is the manufacturing ramp: schedules and yields will determine whether we are talking about a few thousand parts or an Azure-scale deployment. And finally, software and compilers: exploiting 18A-P well requires mature toolchains; the best transistor is useless if the stack does not take advantage of it.
Ordering Maia 2 from Intel Foundry is not merely a change of supplier; it is a bet on efficiency as a competitive advantage. Microsoft aims to lower cost per watt as it scales its AI cloud; Intel wants to prove that its technology is once again on par with the leaders. If 18A-P delivers, we will see accelerators with more sustained muscle and data centers less constrained by thermals. If it fails, it will be another missed opportunity for Intel and a return to square one for Microsoft. The entire industry is watching that first batch.
