
Accelerating Federal AI: New OMB AI Memo 2025 Balances Innovation with Responsibility

Apr 8, 2025 | FairNow Blog

By Stephen Jordan
Model Risk Management

On April 3, 2025, the Office of Management and Budget (OMB) released a new memorandum that seeks to accelerate AI adoption by Federal agencies in a way that protects civil rights, civil liberties, and privacy. The document – titled “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust” (M-25-21) – builds on the pro-innovation Executive Order 14179 from January and replaces last year’s OMB memo M-24-10, which mandated responsible AI practices for Federal agencies.

A Risk-Based Approach

The memo takes a risk-based approach and requires oversight for high-impact AI: models whose output serves as a “principal basis” for decisions that can materially affect an individual’s rights, safety, or access to critical government services. This focus is in line with the current trend of AI regulation in the US, which targets the responsible use of AI in decisions with meaningful human impact.

The definition of “high-impact” covers AI that affects civil rights and liberties, access to education, housing, insurance, credit, employment, and government programs and services, as well as systems that could affect human health, safety, critical infrastructure, and strategic assets. 

The memo lists the categories of AI use that are presumed to be high-impact, ranging from critical infrastructure control systems to healthcare applications, law enforcement tools, and federal employment decisions. These use cases are largely consistent with the categories deemed high-risk under laws like the Colorado AI Act.

Risk Management Practices

High-impact AI systems must undergo pre-deployment testing (including independent review), impact assessments, regular monitoring, and human oversight.

These requirements include:

  • Conducting pre-deployment testing and preparing risk mitigation plans that reflect anticipated real-world outcomes.
  • Completing AI impact assessments that document purpose, data quality, potential impacts, reassessment schedules, costs, and independent review.
  • Implementing ongoing monitoring to identify adverse impacts, performance degradation, and potential security issues.
  • Ensuring adequate human training and assessment for AI operators.
  • Providing appropriate human oversight, intervention and accountability mechanisms.
  • Offering consistent remedies or appeals for individuals affected by AI-enabled decisions.
  • Collecting and incorporating user feedback in the design and development process.

New Governance Structure

To achieve accelerated but safe AI adoption, agencies must set up appropriate governance mechanisms. These include identifying a Chief AI Officer, establishing an AI Governance Board and developing an AI strategy.

The Chief AI Officer will be tasked with championing their agency’s AI agenda and making the appropriate changes to accomplish its goals. This person will also coordinate with other agencies to align AI development and operations efforts. Each agency must identify a qualified person to fill this role within 60 days of the memorandum’s publication.

The AI Governance Board is responsible for empowering agency AI leaders, developing agency policies, maintaining the agency’s AI inventory and ensuring compliance with this memo. The Board must be convened within 90 days and include sufficient representation from the agency’s key disciplines.

Within 180 days, each agency must develop an AI Strategy detailing how it will cut red tape and scale the adoption and maturity of its AI applications.

Transparency and Accountability Mechanisms

Transparency and accountability to taxpayers are driving factors behind the memo. Agencies must inventory and publish details about their use of AI, including documenting high-impact AI use cases and keeping this inventory up to date. Each agency’s AI strategy must be made publicly available as well.

Agencies must share data and AI assets (including code and model weights) across agencies to drive efficiency and save taxpayer dollars.

The open source ecosystem may benefit as well: agencies are encouraged to release and maintain AI code as open source software in public repositories whenever practicable.

The memo also provides guidance on public consultation and feedback, recommending usability testing, public comment solicitation, post-transaction feedback, and public hearings.

A Shift in Tone, Not Fundamentals

Most notably, the memo marks a shift in tone but not in fundamentals. It places more emphasis on innovation and efficiency than its Biden-era predecessor did. Agencies are directed to remove barriers, empower their staff, and improve services in a way that delivers taxpayer value.

But while many observers expected Federal responsible AI efforts to be scrapped by the Trump administration, the same principles (safety, fairness, and accountability) remain the motivating factors. The priorities may have shifted, but the foundation is still the same.

    We’re here to help

    FairNow is on a mission to simplify, streamline, and automate AI governance at scale.

    Learn more about how FairNow’s AI compliance software guides you every step of the way.

    Reach out for a free consultation or demo today.
