
When Does the EU AI Act Come into Force? The Full EU AI Act Timeline

Jan 8, 2024 | FairNow Blog

By Kayla Baum
AI Governance

The Question Everyone Is Asking: When Does The EU AI Act Start?

The EU AI Act will feature a staggered rollout when it comes to enforcement.

Businesses must thoroughly understand the risk level and details of their AI usage in 2024 to avoid penalties under the earliest phases of the Act.

This is what we know so far and how you can stay ahead of these regulations as they evolve.

A Quick Review Of The EU AI Act Timeline Thus Far:

The EU AI Act has made some significant progress since its proposal in April 2021.

April 21st, 2021: The European Commission introduced a proposal to regulate artificial intelligence in the European Union. Alongside the AI Act, they also unveiled a coordinated plan that outlines collaborative actions between the Commission and member states.

June 14th, 2023: The European Parliament approved its stance on the AI Act with 499 votes in favor, 28 against, and 93 abstentions.

December 8th, 2023: European decision-makers reached a political agreement between the European Parliament and the Council of the EU.

So, What Happens Next With The EU AI Act?


Early 2024 (Probably)

We expect to see the adoption of the EU AI Act in early 2024. Note the probably. This is AI Governance, after all; anything can happen!

Mid-Late 2024 (Probably)

Assuming the adoption of the EU AI Act in early 2024, prohibited uses of AI will be banned 6 months later.

Prohibited uses of AI are AI systems that threaten individuals’ fundamental rights.

Unacceptable risk systems encompass those with a high potential for manipulation, either through subtle messaging and stimuli, or by taking advantage of vulnerabilities such as socioeconomic status, disability, or age.

These are categorized into:

  • AI systems or apps that covertly influence human behavior (subliminal practices)
  • Manipulative tactics, such as voice-assisted toys that encourage risky behavior in children
  • Social evaluation systems (social scoring) by governments or companies
  • Certain uses of biometric systems, such as emotion recognition in the workplace
  • Certain systems that categorize people or identify them remotely in real time in public spaces for law enforcement purposes, with a few exceptions

Early 2025 (Probably)

Assuming the adoption of the EU AI Act in early 2024, provisions for general-purpose AI systems and foundation models will be enforced 12 months later.

General-purpose AI systems are versatile AI that can be used for many things. Foundation models are large AI systems that can do various tasks, like generating text, images, or computer code.

For foundation models, the AI Act focuses on fundamental rights and transparency, and it requires impact assessments for high-risk AI systems used in insurance and banking.

Specific obligations for general-purpose AI systems that could pose systemic risks include:

  • Risk assessment using advanced methods
  • Ensuring strong cybersecurity
  • Monitoring and disclosing energy usage

Providers must also follow copyright laws and provide details on how they trained the AI model.

Additionally, there is a requirement to register in a European database, which might lead to legal disputes with rights holders over copyright and privacy laws.

Early 2026 (Probably)

Assuming the adoption of the EU AI Act in early 2024, the full EU AI Act will come into force in early to mid-2026.

This is the big one, no matter which industry you operate in.

You will want to have a meticulously detailed record of each and every AI system you build, buy, and use.

The enforcement of the EU AI Act in 2026 will present several challenges for businesses:

1. Definition of AI: The current definition of AI in the Act is broad and could cover statistical software and even general computer programs. Tech companies have requested a narrower definition of ‘AI systems.’

2. Prohibition on Social Scoring: This could impact credit and insurance industries as it could limit their ability to assess risk.

3. Privacy and Personal Rights: The Act addresses the potential for AI systems to infringe on privacy and personal rights due to their ability to process vast amounts of data.

4. Liability: The Act will give people and companies the right to sue for damages after being harmed by an AI system.

These issues will make inventory assessment more complex and classification more challenging.
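
To make that record-keeping concrete, here is a minimal sketch in Python of what one entry in an internal AI inventory might look like. The field names and risk tiers are purely illustrative assumptions (the Act does not prescribe a schema), but they capture the kind of detail (owner, source, intended use, risk classification) you will want on hand.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative risk tiers loosely mirroring the Act's categories (not an official schema).
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory. All fields are illustrative."""
    name: str
    owner: str                  # team or vendor responsible for the system
    source: str                 # "built", "bought", or "embedded in a product"
    intended_use: str
    risk_tier: str              # one of RISK_TIERS
    uses_personal_data: bool
    last_reviewed: date
    notes: list[str] = field(default_factory=list)

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"Unknown risk tier: {self.risk_tier!r}")

# Example: a hypothetical resume-screening model bought from a vendor.
inventory = [
    AISystemRecord(
        name="resume-screener-v2",
        owner="Talent Acquisition",
        source="bought",
        intended_use="Rank inbound job applications",
        risk_tier="high",       # employment use cases are treated as high-risk
        uses_personal_data=True,
        last_reviewed=date(2024, 1, 8),
    )
]
```

Even a lightweight structure like this makes it far easier to answer "which of our systems are high-risk?" when the 2026 deadline arrives.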

It’s important for businesses to prepare for these changes and understand how they will impact their operations.

Do You Have to Follow the EU AI Act?

Well – of course, you don’t have to do anything. But you are going to want to take the EU AI Act very seriously.

Violation of the EU AI Act carries SIGNIFICANT fines.

These fines are set to range from 7.5 million euros or 1.5% of a company’s global annual revenue (whichever is higher) to a striking 35 million euros or 7% of global annual revenue.

To put things in perspective, these fines are much higher than what GDPR imposes, which tops out at 20 million euros or 2-4% of annual global revenue.
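
To see how the "whichever is higher" wording plays out, here is a rough back-of-the-envelope sketch in Python. The 2-billion-euro revenue figure is a made-up example, and this illustrates the arithmetic only; it is not legal advice.

```python
def max_potential_fine(global_annual_revenue_eur: float,
                       fixed_cap_eur: float,
                       revenue_pct: float) -> float:
    """Upper bound of a fine tier: the fixed cap or the percentage of
    global annual revenue, whichever is higher."""
    return max(fixed_cap_eur, revenue_pct * global_annual_revenue_eur)

# Hypothetical company with 2 billion EUR in global annual revenue.
revenue = 2_000_000_000

# Lower tier quoted above: 7.5M EUR or 1.5% of revenue.
print(max_potential_fine(revenue, 7_500_000, 0.015))   # 30,000,000.0 EUR

# Upper tier quoted above: 35M EUR or 7% of revenue.
print(max_potential_fine(revenue, 35_000_000, 0.07))   # 140,000,000.0 EUR
```

For a company of that size, the revenue percentage, not the fixed euro amount, sets the ceiling, which is why large organizations should pay more attention to the percentages than to the headline euro figures.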

Of course, fines are just one aspect of the risk the company takes on by playing fast and loose with AI regulations.

It’s also important to consider:

  • reputational damage
  • data privacy concerns
  • additional legal risks
  • bias in decision-making
  • increased third-party risks
  • security vulnerabilities

What Should You Do Now?

This staggered rollout aims to simplify rule-making and encourage companies to follow these rules voluntarily.

Strategic leadership teams are already preparing for the upcoming regulations.

You have a few options when it comes to tracking and implementing protective measures in advance of the EU AI Act’s full adoption.

The first, of course, is to read through hundreds of pages of proposed legislation and highlight the clauses most relevant to your company.

Or, let us handle that!

Two Options to Stay on Top of Upcoming Regulations Depending on Your Size, Industry, and Risk Tolerance

1: Simply pop your email in the form below, and we will only update you when there are meaningful changes to the regulations that impact you most.

2: If you’re a large organization, you will want to implement AI governance software to centralize and simplify the process.

If that’s the case, you’ve come to the right place.

Fortune 500 companies depend on FairNow for continuous model testing, regulatory compliance tracking and prep (e.g. NYC LL144, EU AI Act), and bias assessments.

Let us know what you need help with here, and we can show you which tools are going to make your life a lot easier.

About Kayla Baum

Kayla Baum is the VP of Marketing at FairNow. Kayla is an advocate for strong AI governance, strong responsible AI principles, and strong coffee.

Be Ready for the EU AI Act

Is reading hundreds of pages of upcoming legislation not really your thing? Good news! It’s ours! Enter your email to stay informed and on track in real-time.

Let’s Talk Tech

Let us know what we can help with and we will reach out to arrange a demo.