Prepare for the EU AI Act with Ease

AI compliance doesn’t have to be complicated. Use FairNow’s AI governance platform to:

  • Effortlessly ensure your AI aligns with both current and upcoming regulations
  • Stay ahead of compliance requirements before fees and fines become commonplace
  • Prioritize your AI systems by the risk categories defined in the Act (minimal or no risk, limited risk, high risk, and prohibited)
  • Perform integration-free bias assessments on high-risk models

Be Ready for the EU AI Act

Is reading hundreds of pages of upcoming legislation not really your thing? Good news! It’s ours! Enter your email to stay informed and on track in real time.

Your EU AI Act Readiness Roadmap

AI Tools For Risk Management

Prioritize Efforts Based On Risk

With limited resources, prioritization is key. Let us help you create a detailed AI governance roadmap based on business risk.

Model Governance Framework

Know The Right Questions To Ask

In the rapidly evolving AI governance world, proper risk management depends on knowing the right questions to ask. Good news – we’re pretty good at that.


Create Ownership & Accountability

You set the rules, we’ll enforce them. We’ll ensure that the right person has the right access to the right model (at the right time, of course!).

FAQs About EU AI Act Readiness

Who is Subject to Compliance Under the EU AI Act?

The short answer: almost everyone! The Act applies to providers, importers, distributors, and users of AI systems placed on the market or put into service in the EU. However, there are some exceptions, such as AI systems used exclusively for military purposes and purely personal use cases, which are not covered.

Does the EU AI Act Matter for Non-European Companies?

Big time!

The EU AI Act affects non-European companies in two key scenarios:

  1. When AI is developed by an EU provider, regardless of where it’s used globally.
  2. When AI systems from outside the EU enter the EU market.

So, if your company’s AI falls into either of these categories, you’ll definitely need to consider the rules and requirements of the EU AI Act.

When Does the EU AI Act Take Effect?

This one is a little more complicated, but let’s keep it at a high level!

Formal adoption is anticipated in early 2024.

After that, there will be a two-year general implementation period.

For prohibited AI systems and the rules for general-purpose AI systems, there will be specific 6-month and 12-month implementation windows, respectively.

Read the detailed EU AI Act timeline here.

What Are the Reporting and Documentation Requirements under the EU AI Act?

Under the EU AI Act, organizations must meet specific requirements for compliance:

  • Record-Keeping: Maintain up-to-date records of your AI system’s design, development, and operation.
  • Risk Assessments: Conduct thorough risk assessments and document the results.
  • User Information: Provide clear, accessible information to users about your AI system’s capabilities and risks.
  • Transparency: Maintain detailed documentation of how high-risk AI systems work.
  • Incident Reporting: Report safety, security, or compliance incidents promptly.
  • Compliance Documentation: Maintain evidence of compliance, including conformity assessments and risk mitigation measures.
  • Collaboration: Work with regulatory authorities as needed.
  • Documentation Accessibility: Ensure all relevant documentation is readily available for review.

What Does the EU AI Act Say About ChatGPT and Other Generative AI Models?

ChatGPT and other generative AI models fall under a category the Act calls “general purpose AI systems” (GPAIS), defined as “an AI system that – irrespective of how it is placed on the market or put into service, including as open-source software – is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems.”

General purpose AI systems (GPAIS), a category that includes foundation models like ChatGPT, are obliged to provide thorough technical documentation, comply with EU copyright law, and publish detailed summaries of the content used to train the model. 

For the most powerful GPAIS models, additional requirements would apply, including disclosure of the environmental impact of training and operating the models.

The classification of “powerful” GPAIS models would depend on the amount of computing power required to train them, but it’s currently unclear where this boundary lies.

What Are the Consequences of Non-Compliance with the EU AI Act?

Let’s just say we wouldn’t recommend skipping this one! Discussions have already taken place about potential fines and fees for non-compliance. These fines could reach €35 million or 7% of global annual turnover, whichever is higher, for the most serious breaches.

What Is the Definition of “High-Risk” AI Systems Under the EU AI Act?

The EU AI Act defines “high-risk” AI systems as those that pose a significant risk to the health, safety, or fundamental rights of individuals. Examples of such systems include those used in critical infrastructure, education, employment, financial credit scoring, law enforcement, and border control.

What Are the Eight Types of High-Risk Systems in the EU AI Act?

The EU AI Act outlines eight types of high-risk systems that can have a significant impact on the life chances of a user. These systems are subject to stringent obligations and must undergo conformity assessments before being put on the EU market.

The eight types of high-risk systems are:

  1. Biometric identification systems
  2. Systems for critical infrastructure and protection of the environment
  3. Education and vocational training systems
  4. Systems used in employment, talent management, and access to self-employment
  5. Systems affecting access and use of private and public services and benefits, including those used in insurance
  6. Systems used in law enforcement
  7. Systems to manage migration, asylum, and border control
  8. Systems used in the administration of justice and democratic processes, including systems used on behalf of the judicial authority

How Can Enterprises Best Prepare for EU AI Act Compliance?

We’re so glad you asked!

Enterprises can take proactive steps to establish AI risk management frameworks. These frameworks are crucial not just to ensure compliance but also to minimize potential legal, reputational, and financial risks down the road. The rules and principles laid out in the Act, along with related industry standards, are likely to become the norm.

Organizations looking to stay ahead of the curve and gain a competitive edge should keep a close eye on developments related to EU AI Act readiness this year. Participating in voluntary commitment schemes introduced during the Act’s implementation period can also be a smart move.

What does this look like practically? We recommend the following four steps to get you started:

  1. Brush up on the Act and its implications: understand the scope and the risk-based approach.
  2. Inventory models and assess risk: flag high-risk models and create accountability.
  3. Invest in AI governance tools: find the tech that meets your risk profile.
  4. Automate audits and compliance checks: help your stakeholders sleep easy knowing your team is always compliant.

Staying informed and engaged will be key to achieving EU AI Act readiness. Not sure you have the time to sift through hundreds of pages of new legislation? Luckily, we already do that for fun! Add your email to the form below and we’ll keep you informed anytime new legislation breaks.

Demo Our EU AI Act Readiness Toolkit

Have we mentioned that compliance is our love language?

FairNow helps teams prepare for the EU AI Act by centralizing AI governance company-wide, creating accountability, assessing risk, and enforcing responsible AI use. Let us show you how!