The UK’s pro-innovation approach to AI regulation


Section 1 - Description


Section 2 - Requirements


Section 3 - What’s next


Section 4 - Notes

Section 1


On March 29, 2023, the UK published a white paper laying out its approach to AI regulation. This is the latest development in the push toward national-level regulation of AI in the UK, and it comes as Canada and the European Union are actively defining their own regulatory schemes for AI. The framework has three main objectives:

  1. Drive growth and prosperity
  2. Increase public trust in AI
  3. Strengthen the UK’s position as a global leader in AI

The details align with Canada’s AIDA and the EU’s AI Act in terms of the risks considered: the paper describes a risk-based approach in which regulations are commensurate with the potential for harm. However, the UK’s approach is notable for its sharply pro-innovation stance. The opening of the white paper makes clear the desire to reap the future benefits of AI, and this theme is emphasized throughout the document. AIDA and the AI Act both mention a desire to balance risk management with the facilitation of AI development, but neither is as explicit about it as the UK’s white paper.

We expect the subsequent UK legislation to align with the EU’s AI Act on the risks of AI, but given the UK’s explicit pro-innovation focus, the legislation may diverge from the AI Act where the two differ on risk trade-offs. The white paper even cites the UK’s departure from the EU as an opportunity to differentiate itself with a more pro-innovation framework.

Section 2


The framework will be driven by the following principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The authors state that they wish to take a gradual approach to rolling out regulations in order to avoid stifling innovation with cumbersome laws. Following an initial non-statutory implementation period, regulators would then be assigned a statutory duty to have due regard to the principles above.

The proportionality focus means regulating the use case, not the technology or the sector as a whole. The authors decline to categorize AI as high risk based on the technology or sector alone, instead making regulations commensurate with the risk of the outcomes. For example, not all AI applied to critical infrastructure should be considered critical (as it is under the EU’s AI Act); some use cases, like detecting superficial scratches on machinery, carry minimal risk.

A key part of the approach is tasking existing regulatory bodies with carrying out AI regulation. This federated approach, the authors hope, allows AI regulation to be defined by the bodies that understand the use cases best and avoids the potential incoherence of one regulator imposing a one-size-fits-all policy. This differs from the EU AI Act and Canada’s AIDA, both of which assign AI regulation to a single central authority. Some commentators are skeptical of this approach, arguing that the existing regulators lack both sufficient enforcement power and the adaptability to keep pace with fast-changing, novel AI applications.

Section 3

Over the next year, regulators will issue practical guidance on how practitioners can implement the key principles in their sectors.

Given the timelines announced recently, we shouldn’t expect to see draft legislation in the coming 12 months. With the plan for an initial non-statutory implementation period, it will likely be even longer before regulatory requirements are enforced.

Section 4


The white paper defines “AI” somewhat obliquely, as systems that are “adaptive” and “autonomous”. The stated goal of this definition is to be more future-proof in light of rapid changes in AI’s form and capability. This differs from the definitions given by other regulations, which define AI by its capability to automatically generate output (text, a prediction, etc.) from input data. This definition may lead to confusion over which systems are classified as AI. For example, if an employer uses the output of an ML model predicting technical skill level to influence (but not automate) hiring decisions, would that system fall outside the definition because it isn’t autonomous? On a plain reading of the definition, it seems so.

The current approach also carries risks. A light-touch approach to AI regulation raises the possibility of divergence from other national-scale frameworks, which could fragment policy and make it harder for companies to offer AI services in both the UK and elsewhere.