What are China’s Generative AI Laws?
A Detailed Guide to Compliance
Overlapping laws passed since 2023
New, specific metadata requirements for all generated content
Protections for user rights and limitations on permitted content

FAQs About China’s Generative AI Measures
Steps to Achieve Compliance

High-Level Overview
What are the core concepts of the China Generative AI Measures?
1) China has passed a series of laws targeting AI-generated content.
Unlike jurisdictions such as South Korea or the European Union, which have mostly consolidated AI regulation into a single comprehensive framework, China has taken a piecemeal approach, passing a number of laws that regulate different AI types and use cases.
2) Emphasis on labeling, accuracy, and transparency for consumers and users
The bulk of the attention in these rules has gone toward ensuring that generative AI outputs can be easily identified as such by the public, and toward preventing social disruption from deepfakes, misinformation, or other deceptive GenAI outputs. The latest rules, coming into effect in September 2025, ensure GenAI content is identifiable to both humans and machines.
3) Unique limits on content in accordance with Chinese law
As with content created and disseminated through more traditional channels in China, there are expectations and limits on the kinds of content deemed acceptable. Providers must ensure that their services “respect social mores and ethics,” among other considerations.
4) Consumers have rights to access, amend, and delete their data
As in other jurisdictions, China’s privacy laws have created certain rights for consumers whose data is being processed by software, including generative AI systems. Companies are obligated to set up channels for user reports, feedback, and requests to exercise these rights.
China Generative AI Measures Scope
What different laws cover generative AI in China?
China has passed a series of laws to regulate the use, availability, and content produced by generative AI systems.
The Provisions for the Administration of Deep Synthesis Internet Information Services came into effect in January 2023, shortly after the public release of ChatGPT. These provisions were designed primarily to address the threat of “deep synthesis” content that could be mistaken for human output, or be used to generate misinformation or deepfakes.
In August 2023, the Interim Measures for the Management of Generative AI Services came into effect. More substantial than the Deep Synthesis Provisions, these expanded and clarified the expectations on providers of GenAI services in terms of data governance, security, and monitoring, as well as establishing certain rights for consumers over how providers use their data.
Finally, in September 2025, the Measures for Labeling of AI-Generated Synthetic Content will come into force. For providers of generative AI, these rules specify the expectations for metadata labeling and other transparency measures. In turn, platforms where media can be shared, such as social media sites, will be required to read that metadata, alongside conducting their own monitoring, and inform users when they are interacting with content that is definitely or likely AI-generated.
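To make the platform-side obligation concrete, an ingestion pipeline might inspect an uploaded file's metadata for an AI-generation label and decide what notice to show users. The sketch below is purely illustrative: the `AIGC`, `provider`, and `content_id` keys are hypothetical field names, not a schema defined by the Measures.

```python
def classify_upload(metadata: dict) -> str:
    """Classify an upload based on metadata read at ingestion time.

    Hypothetical schema: a boolean "AIGC" flag plus provider details.
    Returns a tag the platform can use to drive user-facing notices.
    """
    if metadata.get("AIGC") is True:
        return "ai-generated"            # full implicit label found: inform users
    if "provider" in metadata or "content_id" in metadata:
        return "possibly-ai-generated"   # partial label: flag as likely AI-generated
    return "unknown"                     # no label: fall back to the platform's own monitoring

# Example usage
print(classify_upload({"AIGC": True, "provider": "ExampleAI Co."}))  # ai-generated
print(classify_upload({}))  # unknown
```

In practice a platform would read this metadata from embedded file formats (e.g., image or video containers) rather than a plain dictionary, but the decision logic would look similar.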
Separately, China also has a provision in its Personal Information Protection Law that guarantees certain consumer rights, and requires handlers of personal data to conduct a personal information protection impact assessment. More about that law can be found in this guide.
How do these laws define “generative artificial intelligence”?
The Interim Measures for the Management of Generative AI Services defines generative AI as “models and relevant technologies that have the ability to generate content such as texts, images, audio, or video.”
Who do the China Generative AI laws apply to?
Because China’s rules governing generative AI have been passed in multiple rounds, there are some overlapping requirements. In general, most of the obligations in these laws fall on providers of generative AI services to individuals outside their own organization, who must ensure that anything generated carries the necessary metadata and complies with Chinese content laws. In most cases these will be developers, but deployers or distributors may also be covered, depending on the context. Organizations should take care to clarify their own role in the AI value chain.
Compliance Requirements
What are the compliance requirements of the Generative AI Measures?
Many aspects of the GenAI Measures require companies to implement processes or documentation similar to that found in other global regulations such as the EU AI Act, or the South Korea AI Basic Law. Basic governance requirements include:
- Training data documentation to ensure intellectual property is respected
- Testing plans during system development that include data quality/tagging assurance
- Data governance that respects personal information rights
- External reporting channels for users to make complaints and reports
One requirement unique to China is that any provider of generative AI services must “verify the real identity information of users” through a variety of possible means.
In addition to these processes, the new Labeling Measures define in more detail how labeling must work, and the event logging necessary to enable it:
- Explicit labeling such as text notifications around or throughout a chatbot interaction, or audible notification during any AI-generated audio
- Implicit labeling in the metadata of any AI-generated files, including the name of the provider and a reference number
- User service agreements that include an explanation of all labeling
- Event logging that records any requests by users to generate content without the required metadata tags, with logs kept for six months
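The implicit-labeling and event-logging requirements above can be sketched in code. This is a hypothetical illustration, not an official schema: the field names (`AIGC`, `provider`, `content_id`) and the log format are assumptions, and a real implementation would embed the label in each file's native metadata format.

```python
import datetime
import json

def make_implicit_label(provider_name: str, content_id: str) -> dict:
    """Build an implicit label to embed in an AI-generated file's metadata.

    The Labeling Measures call for, at minimum, the provider's name and a
    reference number for the content; the exact keys here are illustrative.
    """
    return {
        "AIGC": True,                # flag marking the file as AI-generated
        "provider": provider_name,   # name of the GenAI service provider
        "content_id": content_id,    # reference number for this piece of content
        "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

def log_unlabeled_request(log: list, user_id: str, reason: str) -> None:
    """Record a user request to generate content without the required tags.

    Entries like this must be retained for six months under the Measures.
    """
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "event": "unlabeled_content_request",
        "reason": reason,
    })

# Example usage
label = make_implicit_label("ExampleAI Co.", "GEN-2025-000123")
audit_log = []
log_unlabeled_request(audit_log, "user-42", "API call with labeling disabled")
print(json.dumps(label, ensure_ascii=False))
```

Keeping the label construction and the audit log in dedicated functions makes it straightforward to show regulators exactly what metadata every generated file carries and which requests attempted to bypass it.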
Non-Compliance Penalties
What are the non-compliance penalties for violating the Generative AI Measures?
Depending on the severity of the violation and whether other laws also apply, violators of the GenAI Measures may be subject to a warning, suspension of services, or referral to the public security authorities. Related laws carry heavy fines: violations of the Personal Information Protection Law can cost up to 50 million RMB (approx. 7 million USD).
Status
When do the Measures go into effect?
The Deep Synthesis Provisions and the Interim Measures for GenAI are already in effect. The new Labeling Measures will be effective as of September 1, 2025.
Steps To Compliance
How can organizations ensure compliance with the Generative AI Measures?
Drawing from our extensive work in AI governance and compliance, we’ve identified the following best practices to ensure compliance:
- Adopt an AI Governance or Risk Management program. Although specific requirements differ across jurisdictions, the basic principles in frameworks such as the NIST AI RMF or ISO 42001 will be broadly useful around the world.
- Build an inventory of your AI applications, starting with a risk assessment that can help determine whether your products and services qualify you as a provider and therefore subject you to greater scrutiny and documentation requirements.
- Begin standardizing documentation such as model cards, testing protocols, and impact assessments. Even if most systems your organization currently deploys do not fall under the requirements of the current regulations, others are likely coming that will impact your work.
Staying informed and engaged will be key to ensuring compliance with the Generative AI Measures and any future laws on AI in China.
AI Compliance Tools
How FairNow’s AI Governance Platform Helps
Developed by specialists in AI risk management, testing and compliance, FairNow’s AI Governance Platform is tailored to tackle the unique challenges of AI risk management. FairNow provides:
- Streamlined compliance processes, reducing reporting times
- Centralized AI inventory management with intelligent risk assessment
- Clear accountability frameworks and human oversight integration
- Ongoing testing and monitoring tools
- Efficient regulation tracking and comprehensive compliance documentation
FairNow enables organizations to ensure transparency, reliability, and unbiased AI usage, all while simplifying their compliance journey.
Experience how our industry-informed platform can simplify AI governance.
AI compliance doesn't have to be so complicated.
Use FairNow's AI governance platform to:
- Effortlessly ensure your AI is in harmony with both current and upcoming regulations
- Ensure that your AI is fair and reliable using our proprietary testing suite
