Key Takeaways From the UK’s “AI Assurance” Guide (Explained in Plain Language)
Six Things to Know about the “AI Assurance” Guide
- On February 12th, 2024, the UK released the “Introduction to AI assurance” guide
- In their words, “AI assurance processes can help to build confidence in AI systems by measuring and evaluating reliable, standardised, and accessible evidence about the capabilities of these systems.”
- The UK’s approach to AI regulation emphasizes five main categories: safety, transparency, fairness, accountability, and contestability (detailed below).
- Interestingly, the UK’s approach appears to be “application-specific” regulation, unlike the EU AI Act, which is more centralized.
- The authors emphasize that this approach is “pro-innovation” and does not stifle advancement or the positive aspects of AI.
- Sector-specific guidance will evolve, highlighting the dynamic nature of AI regulation and assurance.
Explain It Like I’m Five
- Imagine you have a robot friend who helps you with your homework. Now, before you can trust this robot to help you, you need to make sure it’s really good at its job and won’t make mistakes.
- AI assurance is like a robot check-up. It’s how grown-ups make sure the robot is working right, being nice and fair to everyone, and can explain how it does your homework.
- In the UK, there are special rules for robot check-ups, and the rules are written by the UK’s Department for Science, Innovation & Technology.
What Is the UK’s “Introduction to AI assurance”?
On February 12th, 2024, the UK’s Department for Science, Innovation & Technology released the first post in a series of guidance on AI assurance.
The guide, titled “Introduction to AI assurance” (very creative!), explains the concept of AI assurance and its role in effective AI governance, and clarifies many of the terms and concepts related to AI assurance.
AI assurance is the process of measuring, evaluating, and then communicating the trustworthiness of AI systems to the government, regulators, and the market.
The practice is part of broader AI governance and an important way for organizations to make their AI trustworthy and to demonstrate it.
What Is the UK’s Approach to AI Regulation?
The UK explained its high-level approach in a whitepaper from March 2023. This whitepaper recognizes both the risks and the benefits of AI, and describes a “pro-innovation approach” that encourages growth while managing the downsides appropriately.
What Are the Five Principles of AI Assurance in the UK?
To that end, the UK’s approach to AI regulation is based on five principles:
- Safety, Security and Robustness: AI systems should operate safely, securely, and robustly. New risks should be identified and mitigated.
- Appropriate Transparency and Explainability: AI systems should be appropriately transparent and explainable, depending on the application context.
- Fairness: AI systems should not undermine legal rights, discriminate unfairly, or create unfair market outcomes.
- Accountability and Governance: Effective oversight of AI systems requires governance measures and clear lines of responsibility.
- Contestability and Redress: Stakeholders should be able to contest the outcomes of AI systems, where appropriate.
Notably, the UK’s approach to regulation is focused on outcomes and is application-specific: the risks of AI depend on how it is used and they must be understood in that context. Thus, the UK regulatory framework relies on existing sectoral legislation and authorities to regulate the risks of AI within each sector.
This differs significantly from the approach taken by the EU AI Act, which passed in late 2023 and uses a more centralized approach to regulate AI.
What Are the Different Kinds of AI Assurance Techniques?
AI assurance techniques are a way to measure and demonstrate compliance.
There are multiple types of AI assurance techniques with different outcomes.
To mention a few examples, a risk assessment seeks to identify the risks that an AI system poses, an impact assessment anticipates the outcomes of an AI system on all its stakeholders, and a bias audit determines whether the AI system exhibits unfair biases.
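To make one of these techniques concrete: a common building block of a bias audit is comparing favourable-outcome rates across demographic groups. The sketch below is a minimal, illustrative example of that idea; the four-fifths threshold and the sample data are our own assumptions, not part of the UK guidance.

```python
# Minimal sketch of one bias-audit building block: comparing selection
# rates across groups. The four-fifths (0.8) threshold and the sample
# data are illustrative assumptions, not drawn from the UK guidance.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 decisions (1 = favourable)."""
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring decisions by group (1 = offer, 0 = reject)
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 3/8 = 0.375
}

ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
print("Flags potential bias" if ratio < 0.8 else "Within four-fifths threshold")
```

A real audit would go much further (statistical significance, intersectional groups, the deployment context the guide stresses), but this illustrates why bias audits produce measurable, reportable evidence.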
Similarly, there are different types of standards with different objectives. Foundational and terminological standards provide common vocabularies and concepts to build a shared understanding. Measurement and test methods offer metrics and methods for evaluating AI systems along dimensions such as safety and security.
Lastly, the guide explains the different stakeholders in the AI assurance ecosystem and their roles, from the government to regulators who define the bar for AI systems to the standards bodies who develop and publish standards.
One last important note: all parts of the AI system lifecycle (training data, the model and system, and the deployment context) are considered in scope for AI assurance.
Are There Software Tools to Help With AI Assurance and Compliance?
We’re glad you asked. Yes! FairNow is AI governance software that simplifies and centralizes AI risk management at scale.
Our software is built to ensure compliance and fairness at every level of your organization.
We feature many of the mechanisms described in the UK’s guidance above:
- Automated Bias Evaluations: This is where we shine! Our platform includes the functionality to scrutinize AI decisions and data for unfair biases, ensuring fairness and transparency across all AI operations. With our bias audit capabilities, companies are continuously assured that their models are fair or alerted immediately if something isn’t right. If your company’s data is incomplete or inconsistent, we can even run integration-free bias assessments using synthetic data.
- Model Risk Assessments: FairNow can help you determine the risk levels of your models based on their use cases and the standards and regulations they need to comply with. This helps your team prioritize their compliance efforts.
- Regulatory Compliance Toolkit: Do you like to read hundreds of pages of evolving AI regulations weekly? We do! We’ll track the laws and standards while you focus on the business. This helps you maximize your AI use cases while staying ahead of the regulatory curve.
AI Assurance Doesn’t Have to Be Arduous
Pop your name in the form below, and we’ll reach out to understand your current setup, your needs, and how we can help.
Keep Learning