Overview of EU's AI Act
This document provides a comprehensive overview of the European Union's Artificial Intelligence Act (EU AI Act), a landmark piece of legislation designed to regulate AI systems within the EU market. We will explore its primary goals, which include ensuring AI systems are safe, transparent, non-discriminatory, and environmentally friendly, while also fostering innovation. This guide will detail the Act's risk-based approach, categorization of AI systems, compliance requirements, penalties, and enforcement timeline, offering a clear understanding of this pivotal regulation.
For a comprehensive exploration of the Act, you can visit the EU's AI Act Explorer.
To assess your compliance, use the AI Act Compliance Checker.
High-Level Summary
The World's First Comprehensive AI Law
The EU AI Act is the world's first major law for AI. It creates rules to make sure AI systems used in the EU are safe, ethical, and respect basic rights like privacy. This law covers all important aspects of AI, including its social impact, setting strong standards for how AI should be used.
Creation Of A Risk-Based Approach
This Act categorizes AI systems by how much risk they pose to people and society. High-risk AI systems face strict checks, while lower-risk ones have fewer rules. This helps foster new ideas while building trust in AI. The main risk categories are: Unacceptable Risk (Forbidden), High Risk, Limited Risk, and Minimal Risk.
Strict And Significant Penalties For Non-Compliance
Not following the rules can lead to large financial penalties. For serious violations, such as using banned AI, companies can be fined up to €35 million or 7% of their total worldwide annual turnover (whichever is higher). For less severe issues, such as not meeting documentation requirements for high-risk AI, fines can reach €15 million or 3% of total worldwide annual turnover.
Creation Of Distinct Stakeholder Groups With Distinct Responsibilities
Providers
These are the companies that develop AI systems or place them on the EU market. They must make sure their AI systems follow the EU AI Act's rules for safety, transparency, and record-keeping. They also need to make sure their staff understands AI.
Deployers
These are the people or organizations that use AI systems in the EU. They need to be open about how they use AI, manage any risks, and tell users when they're interacting with AI. They also need to make sure their staff understands AI.
Importers
These are EU companies that bring AI systems into the EU from outside the EU. They must make sure these imported AI systems follow the EU AI Act. This means checking what the provider did, keeping the right documents, and making sure the AI is properly labeled and has clear instructions.
Distributors
These are businesses that make AI systems available in the EU market, but aren't the ones who made them or brought them from outside the EU. They need to check that AI systems meet EU rules, keep the necessary paperwork, track the systems, and report any problems or risks to the authorities.
EU AI Act Scope: Who needs to follow the AI Act?
The EU AI Act requires the following groups to follow its rules:
Providers: Companies that create, sell, or put AI systems into use in the EU.
Deployers (users): Anyone using AI systems in the EU, especially those operating high-risk AI systems.
Importers: Companies that bring AI systems into the EU market from outside.
Distributors: Companies that sell or offer AI systems within the EU.
Notified Bodies (third-party evaluators): Organizations that check whether AI systems meet the required standards.
Non-EU Providers: Providers outside the EU if their AI systems are used in the EU or impact people there.
These groups must follow the rules based on how risky the AI systems they handle are.
EU AI Act Compliance Requirements
From February 2, 2025, companies that provide or use AI systems must ensure their employees understand AI well enough for their specific roles. This understanding should fit their job, experience, and how they use the AI. To help with this training, the European AI Office has published the "Living Repository of AI Literacy Practices," a guide with examples from over a dozen organizations.
The EU AI Act sets forth specific compliance requirements for AI systems based on their risk classification. Here's a summary of the key requirements for each risk category:
EU AI Act Risk Categories
1. Unacceptable Risk: Banned AI systems
2. High Risk: Strict compliance requirements
3. Limited Risk: Transparency obligations
4. Minimal Risk: Voluntary codes of conduct
Risk Categories in Detail
1. Unacceptable Risk
Starting February 2, 2025, AI systems in this category are banned in the EU. This includes AI used for social scoring, certain biometric identification, or manipulating people's behavior by exploiting vulnerabilities such as age or disability.
2. High Risk
High-risk systems must meet all of the following requirements (a simple tracking sketch follows the list):
  • Risk Management: Set up a system to find, check, and lower risks throughout the AI system's life.
  • Data and Data Governance: Make sure the data used for training, checking, and testing is good quality, useful, fair, and free of bias.
  • Technical Documentation: Keep detailed records about the AI system, its design, how it was built, and how it works.
  • Transparency and Information: Give users clear details about what the AI system can do, what it can't, and how to use it properly.
  • Human Oversight: Put in place ways for humans to effectively supervise the AI to prevent or reduce risks.
  • Accuracy, Robustness, and Cybersecurity: Make sure the AI system is accurate, robust, and secure against cyber threats.
  • Conformity Assessment: Check if the AI system meets standards before selling it. Some systems need checks by outside experts.
  • Registration: Register high-risk AI systems in the EU's special database before they are used.
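Tracking these obligations per system is a common first step. Below is a minimal sketch in Python of such a checklist; the class and field names are our own illustrative shorthand, not terms defined by the Act.

```python
from dataclasses import dataclass

# Illustrative checklist mirroring the high-risk obligations above.
# Field names are our own shorthand, not terms defined by the Act.
@dataclass
class HighRiskChecklist:
    system_name: str
    risk_management_in_place: bool = False
    data_governance_reviewed: bool = False
    technical_docs_complete: bool = False
    usage_instructions_published: bool = False
    human_oversight_defined: bool = False
    robustness_and_security_tested: bool = False
    conformity_assessed: bool = False
    registered_in_eu_database: bool = False

    def open_items(self) -> list[str]:
        """Return the obligations not yet marked as met."""
        return [name for name, value in vars(self).items()
                if isinstance(value, bool) and not value]

checklist = HighRiskChecklist(system_name="cv-screening-model")
print(checklist.open_items())  # all eight obligations start open
```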
3. Limited Risk
Transparency: Users must know when they are interacting with an AI system (e.g., chatbots must state they are AI).
4. Minimal Risk
These AI systems have no mandatory rules. Companies are encouraged to follow voluntary guidelines for best practices.
The "High Risk" category under the EU AI Act has the most rules for companies that develop, use, import, or sell AI in the EU. These rules are designed to make sure AI systems in the EU are safe, clear, and respect basic human rights.
EU AI Act Penalties and Fines
The EU AI Act sets out serious penalties for companies that don't follow its rules.
These penalties are designed to deter violations and ensure accountability. Here are the main tiers:
€35M: Banned AI Use
Putting AI systems on the market, using them, or making them available when they pose an unacceptable risk: fines of up to €35 million or 7% of the company's total worldwide annual turnover (whichever is higher).
€15M: High/Limited Risk Rule Breaking
Not following the rules for high-risk or limited-risk AI systems (such as issues with data quality, documentation, transparency, human oversight, or robustness): fines of up to €15 million or 3% of total worldwide annual turnover (whichever is higher).
€7.5M: Wrong Information
Giving false, incomplete, or misleading information to official authorities: fines of up to €7.5 million or 1% of total worldwide annual turnover (whichever is higher).
These penalties aim to make sure all companies involved with AI in the EU follow the strict rules of the AI Act, helping to create safe and reliable AI.
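Because each tier applies "whichever is higher" of a fixed amount and a share of turnover, the effective cap depends on company size. Here is a minimal sketch of that arithmetic in Python; the tier labels are our own, not terms from the Act.

```python
# Fine caps under the EU AI Act: the higher of a fixed amount or a
# percentage of total worldwide annual turnover applies.
FINE_TIERS = {
    "banned_ai_use": (35_000_000, 0.07),     # unacceptable-risk AI
    "risk_rule_breach": (15_000_000, 0.03),  # high-/limited-risk violations
    "wrong_information": (7_500_000, 0.01),  # false info to authorities
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier."""
    fixed_cap, turnover_pct = FINE_TIERS[tier]
    return max(fixed_cap, turnover_pct * annual_turnover_eur)

# For a company with €1bn turnover, 7% (€70M) exceeds the €35M floor.
print(max_fine("banned_ai_use", 1_000_000_000))  # 70000000.0
```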
Enforcement Timeline: When does the AI Act go into effect?
The AI Act entered into force on August 1, 2024, and its rules take effect in stages.
Most of the AI Act's rules apply from August 2, 2026, but some began as early as February 2025. The key milestones are below, followed by a small date-lookup sketch.
1. February 2, 2025
After six months: companies must make sure their employees understand AI, and AI systems posing an "unacceptable risk" are banned.
2. August 2, 2025
After 12 months: rules for general-purpose AI (GPAI) models, such as the ones behind ChatGPT, start to apply.
3. August 2, 2026
After 24 months: all AI Act rules apply, including those for high-risk systems listed in Annex III.
4. August 2, 2027
After 36 months: rules apply to high-risk AI systems not in Annex III but used as safety components in products.
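Since the obligations switch on at fixed dates, a simple lookup can show which rules already apply on a given day. A minimal sketch; the milestone labels are our own summaries of the stages above.

```python
from datetime import date

# EU AI Act milestones and what starts to apply on each date.
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "AI literacy duty; unacceptable-risk AI banned"),
    (date(2025, 8, 2), "General-purpose AI (GPAI) model rules"),
    (date(2026, 8, 2), "Most rules, incl. Annex III high-risk systems"),
    (date(2027, 8, 2), "High-risk AI used as product safety components"),
]

def rules_in_effect(today: date) -> list[str]:
    """Return the milestones that have already taken effect."""
    return [label for when, label in MILESTONES if when <= today]

print(rules_in_effect(date(2025, 9, 1)))  # first three milestones apply
```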
High-Risk Systems in Annex III
The August 2, 2026 deadline is significant because it covers a wide range of high-risk systems, including, but not limited to:
  • HR Technology: Systems used to source, screen, rank, and interview job candidates.
  • Financial Services: Systems used for credit scoring and assessing creditworthiness (AI used solely to detect financial fraud is expressly carved out of Annex III).
  • Insurance Underwriting: Systems used for calculating premiums, assessing risks, and determining coverage.
  • Education and Vocational Training: Systems determining access to educational opportunities, personalized learning plans, and performance assessments.
Many more are outlined in Annex III.
These examples illustrate just a few of the many high-risk AI systems affected by the Act. The breadth of these regulations underscores the critical need for stringent compliance measures to ensure ethical and safe usage across various sectors.
EU AI Act Readiness
1) Check If Your AI Is Covered
Wondering if the EU AI Act applies to your AI? Start by using the compliance checker tool linked at the top of this document.
*Note: This tool is for information only and not legal advice. Laws can change, and new rules might not be included. Always check with a legal expert to ensure you meet all current requirements.
2) Create Your AI List
  • Gather a team to list all your AI tools and models.
  • Describe what each system does and its abilities.
  • Name the team or person in charge of each model. (A record-keeping sketch follows this list.)
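One lightweight way to keep this list is one structured record per system. A minimal sketch in Python; all field names and the example entry are illustrative.

```python
from dataclasses import dataclass

# One record per AI system in the inventory; fields are illustrative.
@dataclass
class AISystemRecord:
    name: str                # internal identifier
    purpose: str             # what the system does
    capabilities: list[str]  # notable abilities
    owner: str               # accountable team or person

inventory = [
    AISystemRecord(
        name="resume-ranker",
        purpose="Rank incoming job applications",
        capabilities=["scoring", "shortlisting"],
        owner="HR Engineering",
    ),
]
print(len(inventory), "system(s) inventoried")
```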
3) Find High-Risk Models
  • Determine the risk level of each AI system under the EU AI Act's categories: unacceptable, high, limited, or minimal risk.
  • Remember that general-purpose AI models (like those behind ChatGPT) are covered by rules that began just 12 months after the Act entered into force.
  • Rank risks by severity so the most urgent issues are dealt with first. (A first-pass triage sketch follows this list.)
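Final classification requires legal review against the Act's definitions and Annex III, but a first pass over your inventory can be automated. A deliberately simplified sketch; the keyword sets are placeholders, not the Act's actual criteria.

```python
# Crude first-pass triage; real classification needs legal review
# against the Act and Annex III. Keyword sets are placeholders.
PROHIBITED_USES = {"social scoring", "behavioral manipulation"}
HIGH_RISK_USES = {"hiring", "credit scoring", "insurance underwriting",
                  "education access"}

def triage_risk(use_case: str) -> str:
    """Map a use-case label to a provisional EU AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    return "limited-or-minimal (needs review)"

print(triage_risk("hiring"))  # high
```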
4) Set Up AI Management
  • Keep detailed records of each AI system's journey: its purpose, what it can do, and how it makes decisions.
  • Regularly check your AI systems for fairness, how well they explain themselves, and how reliable they are.
  • Make sure humans are always supervising the AI. (A review-log sketch follows this list.)
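Regular checks are easier to evidence when each review is logged in a consistent shape. A minimal sketch; the fields and checks are illustrative, not requirements named by the Act.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative log entry for a periodic AI system review.
@dataclass
class ReviewEntry:
    system_name: str
    review_date: date
    fairness_ok: bool
    explainability_ok: bool
    reliability_ok: bool
    reviewer: str  # the human accountable for oversight

    def needs_attention(self) -> bool:
        """Flag the system if any checked property failed review."""
        return not (self.fairness_ok and self.explainability_ok
                    and self.reliability_ok)

entry = ReviewEntry("resume-ranker", date(2025, 9, 1),
                    fairness_ok=True, explainability_ok=False,
                    reliability_ok=True, reviewer="ML Governance")
print(entry.needs_attention())  # True: explainability failed
```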