Section 1 - Description
Section 2 - Scope
Section 3 - Requirements
Section 4 - Penalties
Section 5 - Status
The AI Act aims to create a risk-based, uniform legal framework for the safe development and usage of AI throughout the entire European Union. The specific objectives of the bill are: (1) ensure AI used in the EU is safe and respects existing law on fundamental rights and EU values, (2) create legal certainty to assist development of AI in the EU, (3) enhance the EU’s capabilities to effectively govern and enforce against these risks and (4) prevent market fragmentation by creating a single EU market for safe and trustworthy AI. The act intends to foster AI investment and innovation in the EU, recognizing the need for AI development, but only if done safely.
The Act is the first broad AI risk-management law with both national and international reach. It is expected to become a widely followed standard and to influence AI regulation in other parts of the world.
Here, AI is defined as software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions. The authors note the importance of keeping this definition broad and future-proof in light of the speed of AI advancement.
The regulation applies to anyone based in the EU using AI systems, or anyone using AI systems to impact people in the EU.
The act covers the development and usage of AI broadly with the exception of military and national defense purposes. Real-time biometric surveillance of public spaces is banned except where necessary for law enforcement.
The EU AI Act classifies AI systems into tiers based on risk: applications posing unacceptable risks are prohibited, and the requirements for all other AI systems correspond to the level of risk they pose.
Prohibited applications
Several types of applications are banned outright:
- Social scoring
- Emotion recognition systems used in school or the workplace
- Biometric categorization systems that use sensitive personal information
- Untargeted scraping of facial images to create facial recognition databases (exceptions are made for narrow applications by law enforcement such as targeted searches for victims or criminals)
- AI that manipulates human behavior
- AI that exploits the vulnerabilities of people based on criteria like age, disability or social situation
High-risk applications
Below the prohibited tier, AI systems are classified as high-risk according to the risks they pose. High-risk areas include:
- Critical infrastructure that could put citizens' health and lives at risk
- Access to educational and vocational training
- Safety components of products
- Employment, access to self-employment, and worker management
- Access to “essential” private and public services such as benefits, banking and insurance
- Law enforcement
- Migration, asylum and border control
- Administration of justice and democratic processes
Before being placed on the EU market, high-risk AI systems must meet requirements covering:
- Implementation of both a risk management system and a quality management system
- An assessment proving conformity
- An assessment proving respect for fundamental rights
- Identification and mitigation of risks
- A cybersecurity risk assessment
- Ensuring the quality of data sets
- Clear user information and the ability for consumers to file complaints and request model explanations
- Sufficient logging of model activity
- Effective human oversight of the system
General purpose AI systems (GPAIS)
General purpose AI systems (GPAIS), a category that includes foundation models like ChatGPT, are obliged to provide thorough technical documentation, comply with EU copyright law, and publish detailed summaries of the content used to train the model. For the most powerful GPAIS models, additional requirements would apply, including disclosure of the environmental impact of training and operating the models. The classification of “powerful” GPAIS models would depend on the amount of computing power required to train them, but it’s currently unclear where this boundary lies.
Low risk applications
Low-risk applications, such as content recommender systems and spam filters, are not subject to the stringent requirements imposed on high-risk applications.
Non-compliance with the Act can lead to fines ranging from €7.5M or 1.5% of global turnover to €35M or 7% of global turnover (in each case, whichever is higher), depending on the size of the company and the nature of the infraction.
Agreement between European Parliament and Council negotiators was reached in trilogue in December 2023. A draft of the final text was leaked in January 2024 and confirmed to be authentic, but small technical details may still change before the text is officially published.
The Act will take effect in stages over the next three years. Companies must stop using prohibited systems within 6 months of the date the Act enters into force. The rules for general purpose AI will apply after 12 months, most other requirements will apply after 24 months, and the rules for high-risk AI systems will apply after 36 months.