
AI Regulations Update H2 2025

Sep 3, 2025 | FairNow Blog

By Tyler Lawrence

What They Mean for Your AI Systems and Compliance Strategy

AI regulation updates in H2 2025 continue to gain clarity, with major provisions now in effect across the EU, China, and U.S. states such as California. At the same time, the sheer volume and variety of individual laws make compliance increasingly complex for organizations deploying or developing AI systems. This roundup highlights the most significant updates, clarifies what’s already in force, and explains how these developments shape your compliance plans and governance strategy.

🇪🇺 European Union AI Act – General-Purpose AI (GPAI)

The staged rollout of the EU AI Act’s requirements has generated some confusion. The AI literacy training requirement and the prohibited-practices provisions have been in effect since February 2025.

However, months of negotiations with the major developers over the General-Purpose AI Code of Practice prompted a broader, very public debate over whether to delay the law’s remaining provisions. Ultimately, the GPAI rules took effect in August. The vast majority of AI developers and deployers are not affected by them; the GPAI rules are aimed squarely at the largest and most sophisticated GenAI models.

The EU has made it clear that the timeline will not change: the transparency requirements for most developers and deployers, along with the extensive requirements on high-risk AI systems, take effect on August 2, 2026.

Learn more about the EU AI Act’s requirements →

🇺🇸 United States – No National Pause

During the protracted negotiations on Capitol Hill over the Trump Administration’s first budget, various proposals to outright ban or sharply limit the regulation of AI at the state level were considered. Ultimately, no ban or moratorium was put into place, at least for now.

That said, the Trump Administration’s AI Action Plan released in July still nodded to the idea that any investment or grants to the states by the federal government designed to spur AI innovation should “consider a state’s AI regulatory climate when making funding decisions.” As such, some states may decide to pause or scale back efforts to regulate AI in pursuit of federal funding.

🇨🇳 China – Labeling Measures

Unlike the EU’s approach of passing a single, sweeping AI regulation, China has gradually unveiled several laws governing automated decision-making and generative AI over the last several years. Each new law has refined and added specificity to the overall set of rules and expectations.

The latest “Measures for Labeling of AI-Generated Synthetic Content” went into effect on September 1. The Labeling Measures build on previous laws and generally cover any company providing AI services outside its own organization. Previous laws established requirements for basic governance practices such as testing plans, data governance measures, and reporting channels. Now, to comply with the Labeling Measures, providers must:

  • Ensure all GenAI output contains both “explicit” labeling (visual watermark, text notifications) and required “implicit” metadata labeling in a standard format
  • Explain labeling in user service agreements
  • Compile event logs of any requests by users to generate content without explicit or implicit labels
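In practical terms, these obligations translate into a labeling step in the output pipeline plus a request log. The sketch below is purely illustrative: the function names, metadata fields, and label text are assumptions, not the standard format the Measures prescribe, which providers should confirm against the official Chinese technical standards.

```python
from datetime import datetime, timezone

# Assumed explicit label text; the Measures specify their own required wording.
EXPLICIT_LABEL = "AI-generated content"

def label_output(text: str, provider: str, model_id: str) -> dict:
    """Attach an explicit, user-visible label and implicit provenance metadata
    to a piece of GenAI text output (illustrative field names)."""
    metadata = {
        # "Implicit" label: machine-readable metadata embedded alongside output.
        "ai_generated": True,
        "provider": provider,
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return {
        # "Explicit" label: a visible notice appended to the content itself.
        "content": f"{text}\n\n[{EXPLICIT_LABEL}]",
        "metadata": metadata,
    }

# Event log of user requests to generate content without labels,
# as the Measures require providers to record.
unlabeled_requests: list[dict] = []

def log_unlabeled_request(user_id: str, reason: str) -> None:
    """Record a user request to omit explicit or implicit labels."""
    unlabeled_requests.append({
        "user_id": user_id,
        "reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
```

A real implementation would embed the implicit label in the file or stream format itself (e.g., image metadata or audio watermarks) rather than a side dictionary, and would follow the exact label wording and metadata schema set by Chinese regulators.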

Companies doing business in China are advised to consult with local teams and experts to ensure that they have properly implemented these requirements.

China Generative AI Labeling Measures →

🐻 California – Automated Decision Systems

Compared to these major international laws, some state-level rules have flown a bit under the radar. Potentially the most important will come into effect in California on October 1, covering how automated decision systems (ADS) may be used in employment contexts. These rules came from the California Civil Rights Division, providing guidance on how the existing Fair Employment and Housing Act interacts with these new technologies.

Deployers (in this case, usually companies’ HR teams or their contractors) now have to comply with longer recordkeeping requirements for ADSs. Additionally, since California civil rights law protects many classes not covered in other parts of the country such as accent or certain disabilities, companies should be extra aware of the need for accommodations or alternatives to certain AI systems.

Most importantly, while the new rules don’t require bias testing of ADS tools, they strongly incentivize it. Companies that can point to pre-deployment bias testing and risk management will have a strong affirmative defense in the event of a discrimination claim.

Learn more about California’s ADS rules →

🇺🇸 Other States

Although numerous states considered passing new AI laws this year, only a handful so far have passed laws likely to have broad impacts. Several, such as Illinois, have passed laws restricting the use of generative AI specifically in mental health contexts, an application that has already caused significant public debate and controversy.

Several others have tweaked laws already on the books. The Utah AI Policy Act was amended to cover a narrower set of AI systems, requiring upfront disclosures to consumers in certain high-risk interactions and disclosure upon request for any GenAI system. After the legislature could not agree on amendments to the Colorado AI Act, it simply delayed the law’s implementation from February 1 to June 30 of next year.

Several other states have passed modest transparency bills. Although the original draft of the Texas Responsible AI Governance Act was much more sweeping, the final version focused on restricting government use of AI and requiring disclosures to consumers in healthcare or government contexts. Maine HB1154 now requires all chatbots to disclose that they are AIs to consumers.

See these bills and more on our regulatory tracker →

What’s Next – Beyond AI Regulation Updates H2 2025

FairNow continues to watch rulemaking and legislative developments around the world to keep our users informed and our platform’s recommendations up to date. As of now, the United Kingdom and Brazil both have broad AI regulatory frameworks moving through their legislative process. In the U.S., California still has a number of major AI bills under consideration, including bills to further limit how deployers may use ADS in employment, the use of AI in employee surveillance, and AI pricing tools.

AI regulation updates in H2 2025 offer both greater clarity and greater complexity. Provisions such as China’s labeling requirements and California’s ADS rules are now active, while broader frameworks like the EU AI Act remain on a fixed path toward 2026. For organizations, the challenge is no longer awareness; it is coordination. Seeking clarity on AI regulations and governance? Discover how our advisory team can help you translate these updates into a roadmap for your systems and compliance strategy.

Learn more →


FAQs Around AI Regulation Updates (H2 2025)

Why are AI regulatory updates important for my organization?

AI regulatory updates create new legal and compliance obligations that organizations must follow to avoid fines or lawsuits. They also clarify expectations for responsible AI deployment, especially in high-risk areas like HR, finance, and healthcare. Understanding these updates ensures businesses can align policies, conduct testing, and demonstrate accountability to stakeholders.

How does the EU AI Act affect companies outside Europe?

Even if a company is not based in the EU, it must comply if it offers AI systems or services within the region. The Act’s requirements on high-risk AI, transparency, and documentation apply to both EU-based and external providers. This extraterritorial scope means global organizations must assess their systems for EU exposure and prepare compliance strategies accordingly.

Do U.S. organizations need to comply with China’s labeling rules?

Yes, if they provide AI services accessible in China. The law applies regardless of company headquarters. Providers must implement both explicit and metadata labels for generative AI outputs, update service agreements, and maintain event logs. Non-compliance could lead to enforcement or reputational damage in the Chinese market.

What risks do U.S. employers face under California’s Automated Decision Systems (ADS) rules?

Employers using ADS in hiring or employment decisions face higher scrutiny under California law. Failure to maintain extended records or provide accommodations could expose companies to legal challenges. While bias testing is optional, those that conduct it proactively will have a stronger defense in discrimination claims, making it a best practice for risk management.

How should organizations prepare for future AI regulations?

Start with a comprehensive AI inventory to identify where AI is used across the enterprise. Conduct risk assessments, bias testing, and update governance policies to align with frameworks like ISO 42001. Building documentation and transparency practices now ensures readiness for upcoming deadlines in 2026 and strengthens trust with regulators, employees, and customers.
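The inventory-first approach described above can be made concrete with a simple record per AI system. This is a minimal sketch under stated assumptions: the class, field names, and risk categories are hypothetical, chosen only to show how an inventory supports flagging systems that still need assessment before 2026 deadlines.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI inventory (illustrative fields only)."""
    name: str
    owner: str                # accountable team or person
    use_case: str
    risk_level: str           # e.g. "high" for HR, lending, or healthcare uses
    jurisdictions: list = field(default_factory=list)  # where it is deployed
    bias_tested: bool = False
    last_assessed: str = ""   # ISO date of the most recent risk assessment

# Hypothetical inventory entries for illustration.
inventory = [
    AISystemRecord(
        name="resume-screener",
        owner="HR Ops",
        use_case="candidate ranking",
        risk_level="high",
        jurisdictions=["EU", "US-CA"],
        bias_tested=True,
        last_assessed="2025-08-15",
    ),
    AISystemRecord(
        name="support-chatbot",
        owner="Customer Success",
        use_case="consumer-facing Q&A",
        risk_level="medium",
        jurisdictions=["US"],
    ),
]

# Flag high-risk systems that have not yet been bias tested.
needs_review = [r.name for r in inventory
                if r.risk_level == "high" and not r.bias_tested]
```

In practice such an inventory would live in a governance platform or database rather than code, but even a spreadsheet with these columns lets a team answer the basic questions regulators ask: what systems exist, who owns them, where they run, and when they were last assessed.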

About Tyler Lawrence


Tyler Lawrence serves as the head of AI Policy for FairNow. In this role, he follows developments in AI standards, regulations and expectations around the world, ensuring that FairNow can guide organizations to success as they seek to leverage responsible AI. He has spent his career helping businesses to achieve ease and excellence in governance, risk and compliance, building software products and writing guidance used by hundreds of organizations. His goal is to empower teams with smooth processes and software that integrates seamlessly with their existing people and systems.

Explore the leading AI governance platform