China plans to build 39 data centers with more than 115,000 NVIDIA H100 and H200 GPUs in Xinjiang, despite the US export ban.

While the United States restricts exports of NVIDIA H100 and H200 GPUs, China's local authorities (especially in Xinjiang and Qinghai) are betting on expanding their artificial intelligence data centers. Some 39 large facilities are on the way, and one of them, in Yiwu County (Xinjiang), is set to host an impressive 80,500 GPUs. If everything goes ahead, it would be one of the most powerful AI infrastructures in the world.

Gigacluster in Yiwu and regional expansion

The main data center will be huge: some 80,500 H100/H200 GPUs concentrated in Yiwu, around 70% of the total planned capacity. The rest would be spread across some 38 additional facilities in Xinjiang and Qinghai. Among them, the Nyocor project stands out: it plans to start with 250 H100 DGX servers (2,000 GPUs), with future phases adding a total of 5,000 more units.
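The figures above can be sanity-checked with a quick back-of-envelope calculation (assuming the ~115,000 total and the 80,500 Yiwu figure reported in the tenders):

```python
# Back-of-envelope check on the reported capacity split.
yiwu_gpus = 80_500
total_gpus = 115_000

share = yiwu_gpus / total_gpus
print(f"Yiwu share of total capacity: {share:.0%}")  # → 70%

remaining = total_gpus - yiwu_gpus
print(f"GPUs spread across the other 38 sites: {remaining:,}")  # → 34,500
```

The 80,500-GPU figure is, in fact, exactly 70% of 115,000, which suggests the "70%" in the tenders is derived from these two numbers rather than rounded independently.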

The magnitude of this deployment does not go unnoticed, especially by comparison: Elon Musk needed about 100,000 H100s to train Grok 3, and DeepSeek used 50,000 Hopper GPUs (30,000 H200, 10,000 H800 and 10,000 H100) for its R1 model. The sheer size of these Chinese projects puts them on par with the most ambitious in the sector.

China's compute gap

The Chinese state controls many of the entities involved. Having 80,000 GPUs dedicated to AI would place China among the world's top players. Its capacity to train large language models (LLMs) or reasoning models (LRMs) would be well covered, clearly expanding its artificial intelligence infrastructure.

According to government sources, Xinjiang already has a center delivering about 24,000 petaflops (roughly 12,000 H100s). The authorities offer incentives such as compute at 80% of the usual price and housing support to attract technology talent.
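The petaflops-to-GPU conversion implied by the article can be sketched quickly. The assumption here is roughly 2 PFLOPS per H100 (close to its FP16 peak with sparsity), which is what the article's own numbers imply; actual sustained throughput depends on precision and workload:

```python
# Rough check on "24,000 petaflops ≈ 12,000 H100s".
# Assumption: ~2 PFLOPS per H100, the rate the article's estimate implies.
total_pflops = 24_000
pflops_per_h100 = 2

estimated_gpus = total_pflops // pflops_per_h100
print(estimated_gpus)  # → 12000
```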

How do those GPUs get there?

Here the story gets complicated. Covering the 115,000 planned units would require more than 14,000 servers equipped with H100/H200. In theory, H200 HGX systems could also be used, but the tenders explicitly specify the H100, which makes clear that they are after maximum performance and memory capacity.
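The server count follows from the standard chassis size. Assuming the typical 8-GPU HGX/DGX H100 configuration (an assumption; the tenders only specify H100 servers, not the chassis):

```python
import math

# Servers needed for the planned GPU count, assuming 8 GPUs per
# HGX/DGX H100 chassis (the standard configuration).
total_gpus = 115_000
gpus_per_server = 8

servers = math.ceil(total_gpus / gpus_per_server)
print(servers)  # → 14375
```

That yields 14,375 servers, consistent with the article's "more than 14,000".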

Nvidia says it sees no signs of mass shipments to China, and official US export figures fall far short of what these projects would need. It is suspected that the chips enter through indirect routes (Singapore, Malaysia) and then cross into China, but there is still no direct evidence confirming it.

Apparently, a large solar tower is planned in Yiwu to ensure a stable energy supply. The choice of location is no accident: cheap land, an altitude that helps cool equipment, and access to solar and wind power. It is a design built around natural cooling and energy stability, both key to the uninterrupted operation of a large-scale AI center.

A blind strategy?

Nvidia warns that running an AI center without official support (drivers, firmware, optimizations) means losing efficiency in both performance and power consumption.

But the Chinese authorities seem willing to accept that cost if the goal is to own their infrastructure. They can live with a performance penalty in order to avoid depending on Western suppliers.

The future of Chinese infrastructure

There is no confirmation that the 115,000 GPUs are really on their way, or that the projects are serious, but the physical investment in land, energy and connectivity suggests they are. Amid geopolitical restrictions and tensions, this massive push points to a long-term strategic commitment: to build its own computing power base and not fall behind in the AI race.

The next few weeks will be key: if the GPUs arrive, if training begins and if technical reports emerge, we will be able to verify whether this is serious… or simply a big promise on paper.