Top 7 China AI Chip Companies 2025
China's AI chip market exceeded RMB 150 billion in 2025, driven by massive demand for AI training and inference infrastructure. US export controls on NVIDIA H100/A100 GPUs have accelerated China's push for domestic AI chip development, with Chinese companies designing increasingly capable alternatives. While still behind NVIDIA in peak performance and software ecosystem, domestic AI chips have achieved cost-effective deployment in inference workloads across cloud, edge, and autonomous driving applications.
TL;DR: China's AI chip market exceeds RMB 150B. Huawei Ascend leads domestic deployment with 100K+ Ascend chips in Chinese cloud data centers while Cambricon's MLU370 serves 50+ AI inference scenarios with 70% of NVIDIA's performance at half the cost.
Top Companies
Huawei Ascend (华为昇腾)
100K+ chips deployed
Huawei's Ascend series (Ascend 910B for training, Ascend 310P for inference) is China's most widely deployed domestic AI chip, with 100K+ chips installed in Chinese cloud data centers. The Ascend ecosystem (CANN software stack, MindSpore framework) serves 3,000+ enterprise AI developers.
Cambricon (寒武纪)
AI chip IPO leader
Cambricon develops the MLU series AI accelerators for inference and training. Its MLU370 achieves 70% of NVIDIA A100 inference performance at 50% of the cost, and is deployed in 50+ AI application scenarios. Cambricon's edge AI chips (Zhongkong series) are used in smartphones, smart cameras, and autonomous driving.
Biren Technology (壁仞科技)
BR100 GPU-class chip
Biren Technology develops GPU-class AI training chips, with its BR100 chip achieving performance comparable to NVIDIA's A100 in key benchmarks. Despite US sanctions blocking its access to TSMC manufacturing, Biren is exploring domestic foundry partnerships and has secured RMB 5B+ in total funding.
Moore Threads (摩尔线程)
Full-graphics GPU
Moore Threads develops full-featured GPUs for both AI computing and graphics rendering, with its MTT S4000 and MTT S80 chips supporting AI inference, CUDA-compatible workloads, and display output. Its MUSA software platform provides compatibility with existing CUDA-based AI frameworks.
Enflame (燧原科技)
Cloud AI training chip
Enflame develops cloud-scale AI training chips, with its Enflame T20 and T21 providing efficient AI training for large language models. Its chips are deployed in Tencent Cloud and other major Chinese cloud platforms, supporting distributed training across 1000+ chip clusters.
T-Head (平头哥)
Alibaba's AI chip arm
T-Head, Alibaba's semiconductor division, develops AI inference chips (the Hanguang 800 series) optimized for Alibaba Cloud workloads. Its chips deliver a 4x efficiency improvement for recommendation, search, and natural language processing workloads compared to GPU alternatives.
Baidu Kunlun (百度昆仑)
Search + AI optimized
Baidu's Kunlun chip is optimized for both AI inference and search workloads, with Kunlun II achieving 256 TOPS performance. It is deeply integrated into Baidu's AI infrastructure, powering ERNIE large language model inference and Baidu search ranking, with millions of queries served daily.
Comparison Table
| Company | Key Chip | Peak Performance | Deployment | Software Ecosystem |
|---|---|---|---|---|
| Huawei Ascend | Ascend 910B | 320 TFLOPS FP16 | 100K+ chips | CANN + MindSpore |
| Cambricon | MLU370 | 256 TFLOPS FP16 | 50+ scenarios | Neuware |
| Biren Tech | BR100 | Comparable to A100 | Limited (sanctions) | BIRENSUPA |
| Moore Threads | MTT S4000 | GPU-class | Growing | MUSA (CUDA compat) |
| Enflame | T20/T21 | Cloud training | Tencent Cloud | Enflame SDK |
| T-Head | Hanguang 800 | Inference optimized | Alibaba Cloud | Custom |
| Baidu Kunlun | Kunlun II | 256 TOPS INT8 | Baidu internal | XPU SDK |
Frequently Asked Questions
How capable are Chinese AI chips compared to NVIDIA?
Huawei's Ascend 910B achieves roughly 60-70% of NVIDIA H100 training performance, and Cambricon's MLU370 reaches about 70% of A100 inference performance. For inference workloads, Chinese chips deliver 80-90% of NVIDIA's efficiency at roughly 50% of the cost. For large-scale training, NVIDIA still leads in both peak performance and software ecosystem maturity.
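The cost-effectiveness argument behind these figures can be made concrete with a quick calculation. This is an illustrative sketch using only the relative numbers cited above (70% of A100 inference performance at 50% of the cost); the function and variable names are our own, not vendor terminology, and no absolute prices are assumed.

```python
# Back-of-envelope performance-per-cost comparison, normalized to an
# NVIDIA A100 baseline of 1.0 on both axes. Uses only the relative
# figures cited in the text (70% performance, 50% cost for the MLU370).

def perf_per_cost(relative_perf: float, relative_cost: float) -> float:
    """Performance delivered per unit of cost, relative to the baseline."""
    return relative_perf / relative_cost

a100_baseline = perf_per_cost(1.0, 1.0)      # reference point: 1.0x
mlu370_inference = perf_per_cost(0.70, 0.50) # 70% perf at 50% cost

print(f"A100 baseline:      {a100_baseline:.2f}x")
print(f"MLU370 (inference): {mlu370_inference:.2f}x")  # 1.40x perf per unit cost
```

The takeaway: a chip at 70% of the performance but 50% of the price delivers 1.4x the performance per unit of spend, which is why domestic chips win on inference economics even while trailing on peak throughput.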
How has US export control impacted China's AI chip industry?
US controls block NVIDIA H100/A100 and AMD MI300 exports to China, and restrict Biren/Moore Threads from accessing TSMC advanced nodes. This has: (1) boosted domestic chip demand, (2) created a thriving secondary market for smuggled chips, (3) accelerated domestic alternatives, and (4) pushed Chinese companies to optimize for mature-node manufacturing.
What is China's AI chip market share breakdown?
NVIDIA (including smuggled/downgraded chips) holds 60-70% of China's AI chip market. Huawei Ascend has 15-20% share in inference. Cambricon has 5-8% share. Other domestic players (Biren, Moore Threads, Enflame, T-Head, Kunlun) share the remaining 5-10%. Domestic share is growing 5-10 percentage points annually.
Can Chinese AI chips run large language models?
Yes, but with limitations. Huawei Ascend 910B clusters can train models up to 100B parameters. For trillion-parameter models, multiple limitations exist: memory bandwidth constraints, software stack immaturity, and inter-chip communication efficiency. Chinese cloud providers use hybrid clusters (domestic + limited NVIDIA) for LLM training.
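A rough memory calculation shows why model scale is the binding constraint. This sketch assumes ~16 bytes of model state per parameter (a common rule of thumb for mixed-precision Adam: fp16 weights and gradients plus fp32 master weights and two optimizer moments) and an approximate 64 GB of HBM per accelerator; both figures are assumptions for illustration, not published Ascend specifications, and activation memory and parallelism overheads are ignored.

```python
# Why trillion-parameter training strains a cluster: model state alone
# (weights + gradients + optimizer moments) must fit across chips.
# ASSUMPTIONS: ~16 bytes/param (mixed-precision Adam rule of thumb),
# ~64 GB HBM per chip (illustrative, not an official spec).

BYTES_PER_PARAM = 16
HBM_PER_CHIP_GB = 64

def min_chips(params_billion: float) -> int:
    """Minimum chips needed just to hold model state, ignoring activations."""
    total_gb = params_billion * BYTES_PER_PARAM  # 1e9 params * bytes / 1e9 B/GB
    return -(-int(total_gb) // HBM_PER_CHIP_GB)  # ceiling division

for p in (100, 1000):
    print(f"{p}B params: >= {min_chips(p)} chips for model state alone")
```

Under these assumptions a 100B-parameter model needs on the order of 25 chips just for model state, while a 1T-parameter model needs 10x that before any activation memory or redundancy, which is where memory capacity, bandwidth, and inter-chip communication efficiency become the limiting factors the answer describes.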
What are the advantages of Chinese AI chips?
Key advantages include: (1) 50-70% lower cost per TOPS, (2) guaranteed supply without export restrictions, (3) customization for specific workloads (search, recommendation, inference), (4) government procurement preferences, and (5) growing software ecosystem through CANN, MUSA, and Neuware platforms.