Top 7 China AI Data Center & Computing Companies 2025
China's AI data center market has experienced explosive growth, driven by demand for large language model training and inference. The country has invested over RMB 200 billion in building intelligent computing centers (智算中心) across 30+ cities, creating a massive GPU computing infrastructure that supports domestic LLM development, autonomous driving training, and scientific computing. By 2025, China's total AI computing capacity exceeds 300 EFLOPS.
TL;DR: China's AI data center market is dominated by cloud giants (Alibaba, Tencent, Huawei) and specialized chip companies (Cambricon, Moore Threads). The government has designated 30+ cities for intelligent computing center construction, with total investment exceeding RMB 200 billion.
Top Companies
Alibaba Cloud Intelligent Computing
**1,000+ EFLOPS capacity:** Alibaba operates China's largest AI computing platform, with over 100,000 GPUs, including NVIDIA A100/H100 and its in-house Hanguang chips. Its ModelScope platform provides pre-trained AI models and computing resources to 2 million developers.
Huawei Ascend Computing
**50+ city deployments:** Huawei's Ascend AI computing platform has been deployed in more than 50 cities, with the Ascend 910C chip achieving performance comparable to the NVIDIA A100. Huawei partners with local governments to build city-level intelligent computing centers.
Cambricon (寒武纪)
**500+ customers:** Cambricon is China's leading AI chip designer, with its SiYuan series chips deployed in data centers nationwide. The latest SiYuan 590 achieves 300 TOPS of INT8 inference performance, serving major internet companies and government AI projects.
China Telecom AI Computing
**100+ green data centers:** China Telecom has built more than 100 green AI data centers with an average PUE below 1.25, using liquid cooling and renewable energy. Its Tianyi Cloud (天翼云) AI platform provides over 200 EFLOPS for government and enterprise AI applications.
Moore Threads (摩尔线程)
**Full-stack GPU:** Moore Threads develops China's only full-stack GPU platform compatible with the NVIDIA CUDA ecosystem. Its MTT S4000 data center GPU delivers 100 TFLOPS of FP32 performance and is being adopted in domestic AI training clusters as a CUDA alternative.
Enflame (燧原科技)
**Cloud-native AI chips:** Enflame specializes in cloud-native AI training chips, with its T20 series achieving 500 TFLOPS at FP16. The company partners with Tencent Cloud and has deployed thousands of chips in large-scale LLM training clusters.
Baidu AI Cloud (百度智能云)
**500+ enterprise clients:** Baidu AI Cloud provides the PaddlePaddle deep learning framework and ERNIE model inference services to more than 500 enterprises. Its AI computing infrastructure supports billion-parameter model training with distributed GPU clusters across 10 data centers.
Comparison Table
| Company | AI Computing Capacity | Key Hardware | Green PUE | Deployment Scale |
|---|---|---|---|---|
| Alibaba Cloud | 1,000+ EFLOPS | H100 + Hanguang | <1.20 | Global, 30+ regions |
| Huawei Ascend | 500+ EFLOPS | Ascend 910C | <1.25 | 50+ cities |
| Cambricon | 100+ EFLOPS | SiYuan 590 | N/A (chip) | 500+ customers |
| China Telecom | 200+ EFLOPS | NVIDIA + domestic | <1.25 | 100+ data centers |
| Moore Threads | 50+ EFLOPS | MTT S4000 | <1.30 | 20+ clusters |
| Enflame | 80+ EFLOPS | T20 series | <1.25 | Tencent Cloud |
| Baidu AI Cloud | 150+ EFLOPS | XPU + GPU | <1.20 | 10 data centers |
Frequently Asked Questions
What is an intelligent computing center (智算中心)?
An intelligent computing center is a data center specifically designed for AI workloads, equipped with high-density GPU/accelerator clusters, high-bandwidth networking, and liquid cooling systems. China has designated 30+ cities for building these centers to support LLM training and AI inference.
How much computing power does China have for AI?
China's total AI computing capacity exceeds 300 EFLOPS (1 EFLOPS = 10^18 floating-point operations per second) as of 2025, distributed across cloud providers, telecom operators, and government-backed intelligent computing centers. The government targets 1,000 EFLOPS by 2030.
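As a rough sanity check, aggregate capacity scales linearly with accelerator count. A minimal sketch of the arithmetic; the chip count and per-chip TFLOPS below are illustrative assumptions, not figures from any specific vendor:

```python
def cluster_eflops(num_accelerators: int, tflops_per_chip: float) -> float:
    """Peak aggregate capacity in EFLOPS (1 EFLOPS = 1,000,000 TFLOPS)."""
    return num_accelerators * tflops_per_chip / 1_000_000

# Hypothetical cluster: 100,000 accelerators at 300 TFLOPS (dense FP16) each
print(cluster_eflops(100_000, 300))  # → 30.0 EFLOPS
```

Note this is peak theoretical throughput; sustained utilization in real LLM training runs is typically well below 100%.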
What chips power China's AI data centers?
China's AI data centers use a mix of NVIDIA GPUs (A100 and H100 units imported before U.S. export controls took effect), Huawei Ascend 910C, Cambricon SiYuan 590, Baidu XPU, and emerging domestic GPUs from Moore Threads and Enflame. The push for import substitution has accelerated domestic chip adoption.
How do Chinese AI data centers achieve energy efficiency?
Leading Chinese AI data centers achieve PUE below 1.20 through liquid cooling (direct-to-chip and immersion), AI-optimized power management, renewable energy procurement, and waste heat recovery. China Telecom and Alibaba Cloud lead in green data center innovation.
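PUE (Power Usage Effectiveness) is simply total facility energy divided by IT equipment energy, so a value of 1.20 means 20% overhead goes to cooling, power conversion, and other non-IT loads. A minimal sketch with illustrative numbers:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.
    1.0 is the theoretical ideal; lower is better."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures (assumptions, not measured data):
# a facility drawing 12,000 kWh in total while IT gear consumes 10,000 kWh
print(round(pue(12_000, 10_000), 2))  # → 1.2
```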
What is the cost of AI computing in China?
AI computing costs in China range from roughly RMB 8-15 per GPU-hour for mainstream inference workloads to RMB 20-50 per GPU-hour for large-scale training. Domestic chip options (Ascend, Cambricon) offer 30-50% cost savings over imported NVIDIA GPUs.
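These per-GPU-hour rates make it easy to ballpark a training bill: multiply GPU count, wall-clock hours, and the hourly rate. A minimal sketch; the cluster size, duration, and rate below are hypothetical examples within the ranges quoted above:

```python
def training_cost_rmb(num_gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    """Estimated cluster rental cost in RMB for a training run."""
    return num_gpus * hours * rate_per_gpu_hour

# Hypothetical run: 1,000 GPUs for 30 days at RMB 30 per GPU-hour
cost = training_cost_rmb(1_000, 30 * 24, 30.0)
print(f"RMB {cost:,.0f}")  # → RMB 21,600,000
```

At the bottom of the quoted training range (RMB 20/GPU-hour) the same run would cost RMB 14.4 million, which is why the 30-50% savings from domestic chips matter at scale.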