China AI Regulation 2025: Laws, Policies, and Compliance Guide

China has established one of the world's most comprehensive AI regulatory frameworks, with the AI Safety Law (effective 2025) serving as the foundational legislation alongside specific regulations for generative AI, algorithmic recommendations, and deep synthesis (deepfakes). The Cyberspace Administration of China (CAC) oversees AI governance, requiring companies to register AI models, implement content safety measures, and ensure algorithmic transparency. Understanding China's AI regulatory landscape is essential for any company operating AI systems in the Chinese market.

TL;DR

China's AI regulatory framework in 2025 centers on the AI Safety Law, which mandates risk classification, algorithmic registration, and safety assessments for all AI systems. Over 1.2 million algorithm registrations have been filed with the CAC. Generative AI providers must obtain licenses and implement content safety systems. Non-compliance penalties can reach 10 million RMB or suspension of services.

Key Insights

Algorithm Registrations

1.2M+

Over 1.2 million algorithm registrations have been filed with the CAC since the algorithm recommendation regulation took effect. Companies including ByteDance, Tencent, Alibaba, Kuaishou, and Baidu have registered thousands of algorithms for content recommendation, search ranking, and user profiling.

AI Safety Law Penalties

10M RMB max

The AI Safety Law sets maximum penalties of 10 million RMB for violations such as failing to conduct safety assessments, deploying unregistered AI models, or generating harmful content. Repeat offenders face suspension or revocation of operating licenses.

Generative AI Licensees

200+

Over 200 companies have obtained generative AI service licenses from the CAC, including major tech companies (Baidu, Alibaba, Tencent, ByteDance) and numerous AI startups. Licensed models must pass content safety evaluations and implement real-time content filtering.
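The real-time content filtering that licensed models must implement can be illustrated with a minimal sketch. This is purely illustrative: the blocklist terms and function name are hypothetical, and production systems layer ML classifiers and human moderation on top of simple term matching rather than relying on a blocklist alone.

```python
from typing import Optional

# Hypothetical placeholder terms -- not an official or real blocklist.
BLOCKLIST = {"banned_term_a", "banned_term_b"}

def filter_generation(text: str) -> Optional[str]:
    """Return the text if it passes the filter; return None to refuse.

    Real deployments combine blocklists with ML classifiers and
    human review queues; this shows only the refusal pattern.
    """
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return None
    return text
```

The key design point is that the filter sits in the generation path and can refuse output entirely, rather than post-hoc flagging content after it reaches users.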

AI Companies Covered

100,000+

China's AI regulations cover over 100,000 companies using AI systems, from large tech platforms to SMEs deploying AI for business operations. The regulations apply to AI systems that interact with the public, influence information dissemination, or make automated decisions affecting user rights.

Side-by-Side Comparison

| Aspect | China | EU (AI Act) | United States | UK |
| --- | --- | --- | --- | --- |
| Primary Law | AI Safety Law (2025) | EU AI Act (2024) | Executive Orders | Pro-innovation framework |
| Risk Classification | 3-tier mandatory | 4-tier risk-based | Voluntary | Principles-based |
| GenAI Rules | Specific license required | Transparency obligations | No specific rules | Voluntary code |
| Algorithm Registration | Mandatory (CAC) | Not required | Not required | Not required |
| Training Data Requirements | Training data disclosure | Copyright compliance | Voluntary | Voluntary |
| Deepfake Regulation | Mandatory labeling | Transparency required | State-level rules | Voluntary |
| Max Penalties | 10M RMB | 35M EUR / 7% revenue | No federal penalties | Existing laws |
| Enforcement | CAC + MIIT + SAMR | National authorities | FTC (limited) | ICO + FCA |

Frequently Asked Questions

What are the key requirements of China's AI Safety Law?

China's AI Safety Law (effective January 2025) establishes the following key requirements:

- Risk classification: all AI systems must be classified into three risk tiers (high, medium, low) based on their application domain and potential impact on public safety and individual rights.
- Mandatory safety assessments: high-risk and medium-risk AI systems must pass technical testing, ethical review, and impact assessments before deployment.
- Algorithmic transparency: companies must register algorithms with the CAC and explain how AI systems make decisions affecting user rights.
- Content safety: generative AI systems must implement real-time content filtering and refuse to generate content that violates Chinese laws, endangers national security, or spreads misinformation.
- Data governance: training data must be sourced legally, with requirements for data quality, diversity, and bias testing.
- Human oversight: high-risk AI applications must include human review mechanisms that allow users to contest automated decisions.

Companies must also establish internal AI governance committees, designate responsible persons, and maintain compliance records for at least three years.
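The risk-tier gating described above can be sketched as a simple pre-deployment check. This is a minimal sketch, not an official CAC schema: the tier names mirror the law's three tiers, but the class and function names are hypothetical and the real process involves regulator review, not a boolean check.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The three risk tiers under the AI Safety Law."""
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class AISystem:
    """Hypothetical compliance record for one AI system."""
    name: str
    tier: RiskTier
    safety_assessment_done: bool
    registered_with_cac: bool

def may_deploy(system: AISystem) -> bool:
    """All systems must be registered with the CAC; high- and
    medium-risk systems additionally need a completed safety
    assessment before deployment."""
    if not system.registered_with_cac:
        return False
    if system.tier in (RiskTier.HIGH, RiskTier.MEDIUM):
        return system.safety_assessment_done
    return True
```

For example, a registered high-risk recommender without a completed safety assessment would fail this check, while a registered low-risk system would pass without one.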

How do foreign AI companies comply with China's AI regulations?

Foreign AI companies operating in China must navigate several compliance requirements:

- Register AI algorithms and models with the CAC through a local legal entity or authorized representative.
- Ensure all AI-generated content complies with Chinese content standards, which differ significantly from Western free speech norms.
- Store training data for models operating in China on domestic servers, in compliance with China's data security and personal information protection laws.
- Obtain a CAC license for generative AI services, which requires demonstrating content safety capabilities and establishing local moderation teams.
- Appoint a domestic responsible person and maintain a local compliance team.

Many foreign companies partner with Chinese firms to navigate these requirements, though this creates data governance complexities. Notably, China's regulations apply extraterritorially to AI systems that target or influence Chinese users, even if those systems are operated from outside China. In practice, most global AI companies either establish dedicated China compliance teams or limit the availability of their AI services in the Chinese market.