US NIST AI Risk Management Framework


Section 1 - Description


Section 2 - Scope


Section 3 - Requirements


Section 4 - Penalties

Section 1


On January 26, 2023, NIST released version 1.0 of the AI Risk Management Framework (AI RMF). The framework is intended for voluntary use and was developed to incorporate trustworthiness considerations into the design, development, and use of AI systems while managing risks effectively. It arose out of a directive from Congress and was produced in collaboration with both the private and public sectors.

Notably, the RMF is a voluntary exercise, unlike the EU AI Act or Canada's Artificial Intelligence and Data Act (AIDA), both of which are legislative instruments. There is currently no proposal for a national AI regulation law in the US.

Section 2


The framework is voluntary and intended for any actor who plays a part in the design, development, use, or maintenance of AI systems and wishes to build trustworthiness and responsibility into those systems.

Section 3


The RMF is broken down into two parts.

The first focuses on framing the risks an AI system poses by understanding the system's intended audience and the potential harms it could cause.

The second comprises the “Core” of the Framework and is in turn organized around four key functions: 

  1. Govern details the policies and procedures an organization should have in place. By adopting these, organizations develop the means and knowledge to manage AI risks effectively.
  2. Map identifies and frames AI risks in the context in which an AI system is developed and used. The steps in this function allow users to make informed decisions about whether to proceed with designing, developing, or deploying an AI system.
  3. Measure uses metrics and methodologies to assess, analyze, and monitor AI risks and impacts. Measuring AI risks gives organizations the means to understand the performance, trustworthiness, and social impact of their systems over time, allowing them to know when AI systems may pose unacceptable risks.
  4. Manage describes how risk management resources should be allocated to the risks identified. This function relies on the previous three functions to identify and measure risks, and to decide how to use risk management resources effectively.

Section 4


There are no penalties; the framework is purely voluntary, and there is no stated plan for it to become regulation.
