
Where Is The Artificial Intelligence Regulations Landscape Going?

May 29, 2025 | FairNow Blog

By Guru Sethupathy
Artificial Intelligence Regulations in 2025: 4 Trends to Track

Over the past 18 months, governments at the local, state, and national levels have passed dozens of enforceable AI laws in the U.S. and globally. You can review a full list of artificial intelligence regulations, but it's just the tip of the iceberg—many more bills are currently advancing through legislative pipelines across states and countries.

While these laws carry legal weight, companies must also pay close attention to emerging standards that guide responsible AI development. Although voluntary, frameworks like ISO 42001 (for vendors) and the NIST AI Risk Management Framework (for employers) are becoming essential references for safe, fair, and transparent AI practices.

Standards help organizations operationalize legal requirements. For instance, implementing ISO 42001 may demonstrate compliance with mandates that call for formal AI risk management systems. Even without regulatory pressure, companies increasingly adopt these frameworks to signal trustworthiness to regulators, customers, and other stakeholders.

As AI regulations and standards continue to evolve in 2025 and beyond, businesses need a clear understanding of how these changes will affect their development and deployment strategies. Most frameworks now take a risk-based approach, imposing stricter compliance expectations on high-risk domains such as healthcare, financial services, and HR—and on use cases like hiring or credit decisions. If your organization is building or deploying AI, it’s critical to determine whether your use case falls into one of these high-risk categories.

With that in mind, let’s explore a few more key factors shaping the future of AI law and standards.

Developer Vs. Deployer

A critical question in AI regulation is where accountability should sit along the AI supply chain—specifically, whether lawmakers should focus on those who build AI systems or those who use them.

One regulatory strategy places responsibility on the developers. Under this model, developers must demonstrate the safety, fairness, and intended use of an AI system before it enters the market. The EU AI Act follows this approach, requiring builders to meet strict pre-deployment obligations.

At the other end of the spectrum, some regulations target the deployers—the companies or institutions that apply AI in real-world settings. Advocates for this model argue that deployers, not developers, control how and where an AI system is used. Because they determine the context, they should bear responsibility for ensuring that usage remains safe and compliant.

Emerging laws like the Colorado AI Act are taking a hybrid approach—placing responsibilities on both builders and users. Developers are expected to assess risks, define safe usage parameters, and disclose limitations. Deployers, in turn, must follow those guidelines, monitor the AI’s performance, and intervene if risks emerge. This shared-responsibility model reflects a more practical understanding of how AI operates within complex systems and real-world workflows.


Predictions For The Evolution Of Artificial Intelligence Regulations And Standards

Based on the developments so far, I have four predictions for the next few years when it comes to the AI regulatory landscape:

1. U.S. federal guardrails around the development and deployment of AI will weaken.

The Trump administration is expected to roll back federal efforts on AI governance and to repeal President Biden’s 2023 Executive Order on responsible AI. The order directed certain government agencies to set safety guardrails, established new oversight offices, and recommended that AI developers follow safety practices.

Additionally, the Biden-era OMB memorandum on AI governance, which set expectations for responsible AI practices across government agencies and their contractors, could also be struck down. Since this article was originally published on January 31, 2025, President Trump has issued a new 2025 OMB memorandum on AI governance, focused specifically on guidelines for federal agency use of AI.

2. Artificial Intelligence regulations will continue at the state and local levels.

Even if the federal government deprioritizes protection against bias and other consumer harms, states and local governments will likely fill in the gaps. Several such measures have already passed, like NYC Local Law 144 and the Colorado AI Act. Numerous states—including New York, California, and Texas—have bills at various stages of the legislative process that share some common themes.

These bills require risk management practices, especially ones focused on mitigating bias. They also require assessments of AI’s impact on individuals and technical documentation, as well as consumer transparency provisions.

3. AI will be added to existing industry regulations and conformity assessments.

Along those lines, I expect that many industries will embed AI-specific requirements in existing industry regulations and/or product conformity assessments. Here are a few examples:

• The EEOC has clarified that the use of AI is still subject to antidiscrimination laws and that the users of AI are responsible for discrimination in hiring, even when using a model from a vendor.

• In healthcare, AI-driven diagnostic tools may need to comply with existing regulations such as the FDA’s requirements for medical devices to ensure safety and efficacy. The agency has released guidelines and principles to shape the development and use of the technology.

• The EU’s General Product Safety Regulation (GPSR) has recently been updated to encompass AI. Under the revised version, AI used in a physical product, or AI that could impact consumer safety, would be required to follow risk management practices.

4. Market standards will grow in importance.

Trust is still a prerequisite for the successful adoption of AI.

Even when it’s not a legal requirement, companies will still care about building that trust. Management will want to reduce liabilities and incidents from the misuse of AI. Customers and other stakeholders will expect AI technology vendors to demonstrate that their technology is safe and reliable.

Standards like NIST AI RMF and ISO 42001 can help fill this gap. By establishing common guidelines for accountability, fairness, transparency and reliability, these standards enable organizations to demonstrate that AI is developed and deployed with appropriate risk controls and ethical considerations in mind.

Conclusion

While still in the early stages, the AI regulatory landscape and its expectations are starting to take shape. The complexity will be fairly high, and companies that invest in AI governance early in their AI journey will be at an advantage.

    NOTE: Originally Posted on January 31, 2025 in Forbes

    About Guru Sethupathy

    Guru Sethupathy has spent over 15 years immersed in AI governance, from his academic pursuits at Columbia and advisory role at McKinsey to his executive leadership at Capital One and the founding of FairNow. When he’s not thinking about responsible AI, you can find him on the tennis court, just narrowly escaping defeat at the hands of his two daughters. Learn more on LinkedIn at https://www.linkedin.com/in/guru-sethupathy/
