What is the EU AI Act?

Artificial intelligence is everywhere, and regulation is catching up. The EU AI Act is set to reshape how organizations build, buy, and use AI, far beyond Europe’s borders. That’s why we’re breaking down what the EU AI Act is, who it impacts, how the risk categories work, what compliance really requires, and what organizations can do now to prepare with confidence.

    If your organization builds, buys, or uses AI, you’ve probably heard about the EU AI Act by now. 

    It’s the European Union’s landmark regulation designed to make AI safer and more trustworthy, while still supporting innovation. Much like the General Data Protection Regulation (GDPR), the EU AI Act sets common rules across EU member states and applies far beyond Europe’s borders. Even if your company isn’t headquartered in the EU, the Act can still affect you if your AI systems are used by EU-based customers, employees, or partners – or if their outputs impact people in the EU.

    For many organizations, the EU AI Act can feel overwhelming at first glance. It’s long, detailed, and full of legal terminology. But at its core, the regulation is built around a simple idea: the higher the risk an AI system poses to people, the greater the responsibility placed on the organization using or providing it.

    Here’s a practical, friendly guide to what it is, what it requires, who’s impacted, how to prepare, and what happens if you ignore it.

    What is the EU AI Act?

    The EU AI Act is a comprehensive regulation introduced by the European Union to govern how artificial intelligence systems are developed, deployed, and used. Its purpose is to ensure AI technologies are safe, transparent, and aligned with fundamental human rights, while still encouraging innovation and economic growth.

    Rather than treating all AI the same, the EU AI Act recognizes that not all AI systems pose the same level of risk. A product recommendation engine does not carry the same potential consequences as an AI system used to screen job applicants or assess creditworthiness. Because of this, the regulation takes a risk-based approach, tailoring requirements based on how much impact an AI system could have on individuals and society.

    Importantly, the EU AI Act applies not only to organizations based in the EU, but also to any organization that places an AI system on the EU market, uses AI systems in the EU, or produces AI outputs that affect people in the EU. This means global companies, much like with GDPR, cannot ignore the regulation simply because they are headquartered elsewhere.

    At a high level, the EU AI Act groups AI systems into three main risk categories, each with its own expectations and obligations:

    1. Unacceptable risk

    At the highest level are AI systems considered to pose an unacceptable risk to people’s rights and freedoms. These systems are banned outright because their potential for harm is deemed too great to mitigate through safeguards alone. Examples include AI systems that:

    • Manipulate or exploit vulnerable individuals in a way that causes harm
    • Enable social scoring by governments or organizations in ways that unjustly disadvantage people
    • Use certain types of biometric identification or categorization without appropriate legal justification
    • Infer sensitive personal characteristics (such as beliefs or orientation) from biometric data in prohibited contexts

    2. High-risk

    These AI systems are allowed to exist and be used, but only if organizations meet strict requirements designed to reduce the likelihood of harm. An AI system is typically considered high risk if it plays a role in decisions that can significantly affect someone’s life, opportunities, or safety. This includes areas such as:

    • Recruitment, employee management, and performance evaluation
    • Education and vocational training
    • Access to essential services like credit, insurance, or healthcare
    • Law enforcement, border control, and migration management
    • Safety components of regulated products and critical infrastructure

    3. Limited and minimal risk

Most AI systems fall into the limited or minimal risk category. These are systems that do not significantly affect people’s rights or safety, but may still carry transparency obligations so individuals understand when AI is being used. For example:

    • Users may need to be informed when they are interacting with an AI system rather than a human
    • AI-generated or manipulated content, such as synthetic images, audio, or text, may need to be clearly disclosed
    • Certain emotion recognition or biometric categorization systems must be communicated to users in advance

    Who is impacted by the EU AI Act?

    At a high level, the EU AI Act applies to any organization that develops, sells, deploys, or benefits from AI systems that are used in, or have an impact on, the European Union. That includes companies based both inside and outside the EU.

To make this easier to understand, the regulation defines several key roles. An organization may fall into one or more of the following roles:

    1. AI providers: organizations that build or offer AI systems

    Providers are organizations that develop an AI system or general-purpose AI model and place it on the market or put it into service under their own name or brand.

    This includes:

    • Software vendors building AI-powered products
    • Organizations fine-tuning or adapting existing models and offering them as part of their own solution
    • Companies embedding AI into hardware or digital products they sell

    Providers carry the largest compliance burden, particularly for high-risk AI systems. They are responsible for ensuring the system meets regulatory requirements before it reaches customers, including risk management, technical documentation, testing, and ongoing monitoring.

    If you sell AI-enabled software or embed AI into your products, even if it’s built on top of third-party models, you may be considered a provider under the AI Act.

    2. AI deployers: organizations that use AI in their operations

    Deployers are organizations that use AI systems as part of their internal processes or customer-facing operations. This is where many non-technical businesses are impacted.

    Deployers include organizations using AI for:

    • Hiring, performance management, or workforce analytics
    • Customer support chatbots or virtual assistants
    • Fraud detection, credit scoring, or risk assessment
    • Personalization, recommendations, or dynamic pricing
    • Content generation, translation, or product information enrichment

    Even if the AI system is purchased from a vendor, deployers are still responsible for how it is used. This includes ensuring the system is applied appropriately, that human oversight is in place when required, and that outputs are monitored for potential risks or errors.

    In other words, “we didn’t build it” is not a free pass under the EU AI Act.

    3. Importers and distributors: bringing AI into the EU market

Organizations that import or distribute AI systems within the EU also have responsibilities. This includes importers, who place AI systems from non-EU providers onto the EU market, and distributors, who make AI systems available in the EU without modifying them.

    While these roles don’t carry the same level of responsibility as providers, they are still expected to verify that AI systems meet basic compliance requirements and to cooperate with authorities if issues arise.

    This is especially relevant for global organizations that sell AI-powered tools across regions.

    4. Product manufacturers: AI embedded in physical or regulated products

    The AI Act also impacts manufacturers that integrate AI into physical products, such as:

    • Consumer electronics
    • Medical devices
    • Industrial equipment
    • Automotive systems

    If AI is part of a product’s functionality, particularly in safety-related or regulated contexts, the manufacturer may be treated as the AI provider and must meet corresponding obligations.

    This is especially important for organizations combining software, hardware, and AI into a single product experience.

    5. General-purpose AI (GPAI) providers

    The regulation introduces specific obligations for general-purpose AI models, which are designed to be adapted across many downstream use cases.

    Organizations providing these models, whether proprietary or open source, must meet transparency and documentation requirements, particularly if the model poses systemic risk due to its scale or capabilities.

    Even if you don’t sell a finished AI “application,” providing a model that others build on can still bring you into scope.


    What are the key requirements for the EU AI Act?

    1. For high-risk AI systems (providers)

Providers of high-risk AI systems should expect obligations such as:

    • A continuous risk management system across the AI lifecycle (identify, evaluate, mitigate, test, monitor)
    • Strong data and data governance expectations (data quality and appropriateness for intended purpose)
    • Technical documentation and record-keeping/logging to demonstrate compliance and support audits
    • Clear instructions for use for downstream deployers
    • Designed-in human oversight
    • Appropriate levels of accuracy, robustness, and cybersecurity

    2. For high-risk AI systems (deployers)

    Deployers (users of the system) also have real responsibilities, including:

    • Use the system according to instructions, with human oversight in place
    • Ensure input data is relevant for the intended purpose
    • Monitor operation and take action if risks emerge
    • Keep logs (where applicable) and cooperate with providers/authorities if issues arise

    3. Transparency obligations

    Depending on the system, organizations may need to:

    • Tell people when they’re interacting with an AI system (unless obvious or exempted)
    • Disclose/label AI-generated or AI-manipulated content (including deepfakes, and certain generated text rules)
• Inform people when emotion recognition or biometric categorization is used (with specific exemptions mainly tied to lawful criminal justice uses)

    4. General-purpose AI (GPAI) model obligations

    If you provide a general-purpose AI model (the kind that can be integrated into many downstream applications), you’ll need to:

    • Maintain technical documentation (including training/testing and evaluation information)
    • Provide information to downstream providers integrating your model
    • Put in place a copyright policy
    • Publish a public summary of training data (at the level required by the Act)
    • Note: open-source GPAI models may have partial exemptions, except where “systemic risk” rules apply

    What are the penalties for non-compliance?

    Penalties are designed to be “effective, proportionate and dissuasive,” and Member States set enforcement details, but the AI Act sets maximum administrative fines that can be very large:

    • Up to €35 million or 7% of worldwide annual turnover (whichever is higher) for violating prohibited practices
    • Up to €15 million or 3% of worldwide annual turnover for violating key obligations (including many provider/deployer duties and transparency obligations)
    • Up to €7.5 million or 1% of worldwide annual turnover for supplying incorrect, incomplete, or misleading information to authorities
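
To make the “whichever is higher” logic concrete, here is a minimal illustrative sketch; the €2 billion turnover figure is hypothetical and not taken from the Act:

```python
# Illustrative only: the AI Act caps fines at a fixed amount or a percentage
# of worldwide annual turnover, whichever is higher.
def fine_cap(fixed_eur: int, turnover_pct: float, annual_turnover_eur: int) -> float:
    return max(fixed_eur, turnover_pct * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical €2B worldwide annual turnover

print(fine_cap(35_000_000, 0.07, turnover))  # prohibited practices: 140,000,000.0
print(fine_cap(15_000_000, 0.03, turnover))  # key obligations: 60,000,000.0
print(fine_cap(7_500_000, 0.01, turnover))   # misleading information: 20,000,000.0
```

For a company of that size, the percentage-based cap is the binding one in every tier; for a smaller company, the fixed amounts would apply instead.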

    Beyond fines, regulators can also require corrective actions and restrictions on systems, and the reputational risk can be significant.

When does the EU AI Act go into effect?

    The AI Act rolls out in phases. A helpful high-level timeline from the European Commission’s AI Act Service Desk is:

    • 2 Feb 2025: General provisions (including definitions and AI literacy) + prohibitions apply
    • 2 Aug 2025: Rules for general-purpose AI (GPAI) apply; governance structures and national penalty regimes should be in place
    • 2 Aug 2026: “Majority of rules” apply; high-risk AI systems in Annex III and transparency rules begin applying; enforcement starts
    • 2 Aug 2027: Rules for high-risk AI embedded in regulated products apply 

    How organizations can prepare for the EU AI Act

    Preparing for the EU AI Act doesn’t require hitting pause on innovation or overhauling everything overnight. For most organizations, readiness starts with visibility and structure, not perfection. The goal is to understand where AI exists in your organization, how it’s being used, and what level of responsibility comes with each use case.

    1. Build an AI inventory (yes, even the “small” stuff)

    A strong first step is creating an AI inventory. This means identifying every AI system your organization builds, buys, embeds, or uses, including tools that might not immediately feel “high risk,” such as marketing personalization platforms, content generation tools, search and recommendation engines, customer support chatbots, fraud detection systems, or demand forecasting solutions. 

    Many organizations are surprised by how much AI they already rely on once they map it out.
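
A practical starting point is a simple structured record per system. The sketch below is purely illustrative; the field names and example values are assumptions, not something the Act prescribes:

```python
from dataclasses import dataclass

# Illustrative AI inventory record; field names are assumptions, not mandated by the Act.
@dataclass
class AISystemRecord:
    name: str                     # e.g. "Customer support chatbot"
    vendor: str                   # internal team or external supplier
    purpose: str                  # intended use, in plain language
    affected_people: list[str]    # e.g. ["customers"], ["job applicants"]
    our_role: str                 # "provider", "deployer", "importer", etc.
    risk_tier: str = "unclassified"  # filled in during classification (step 2)

inventory = [
    AISystemRecord(
        name="Product recommendation engine",
        vendor="Internal",
        purpose="Suggest related products on the storefront",
        affected_people=["customers"],
        our_role="deployer",
    ),
]
```

Even a spreadsheet with the same columns works; the point is that every system, however small, gets a row.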

    2. Classify each use case by risk tier

    From there, each AI system should be evaluated based on its intended purpose, who it affects, and the potential impact of its outputs. This allows you to classify systems according to the AI Act’s risk categories and identify where obligations may apply. 

At the same time, it’s important to clarify your role for each system, whether you are acting as a provider, deployer, importer, or some combination of these roles. This step is critical, as obligations differ depending on the role you play.
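
Risk classification is ultimately a legal assessment against the Act’s own definitions and annexes, but a rough triage helper can speed up the first pass over the inventory. The sketch below continues the hypothetical inventory from step 1 and uses assumed, simplified criteria:

```python
# Illustrative triage only; real classification requires legal/compliance review
# against the Act's actual risk categories.
def triage_risk_tier(affects_rights_or_safety: bool,
                     involves_prohibited_practice: bool) -> str:
    if involves_prohibited_practice:
        return "unacceptable"        # banned outright, e.g. social scoring
    if affects_rights_or_safety:
        return "high"                # e.g. hiring, credit, essential services
    return "limited_or_minimal"      # transparency duties may still apply

record = inventory[0]
record.risk_tier = triage_risk_tier(
    affects_rights_or_safety=False,
    involves_prohibited_practice=False,
)
print(record.name, "->", record.risk_tier)  # Product recommendation engine -> limited_or_minimal
```

Anything the helper flags as “high” or “unacceptable” should go to proper legal review rather than being treated as classified.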

    3. Establish AI governance and evaluation practices

    Once visibility is established, preparation becomes an exercise in governance and consistency. Organizations should define clear ownership for AI-related decisions, including risk assessments, vendor selection, incident escalation, and documentation. 

    This often cuts across teams, and benefits from a shared framework rather than siloed decision-making. Building basic AI literacy across teams is also essential, as the regulation expects organizations to understand how their AI systems function and where risks may arise.

    4. Operationalize transparency

    From an operational standpoint, many AI Act requirements align with best practices organizations may already be working toward. Strengthening data governance, documenting training and evaluation processes, implementing human oversight where decisions have meaningful consequences, and monitoring AI performance over time all help reduce risk while improving system quality. 

    Transparency plays a key role, and organizations should be prepared to clearly communicate when AI is being used, especially in customer-facing or content-generating scenarios.

    5. Get serious about vendor and model governance

    Finally, vendor and partner management becomes increasingly important under the EU AI Act. Organizations should be prepared to ask AI vendors for documentation, compliance assurances, and clarity on how models are trained and governed. 

    Even when AI is sourced externally, responsibility for its use does not disappear. Treating AI governance as part of standard procurement and risk management processes helps avoid surprises later.

    The most important thing to remember is this: compliance is a journey, not a single milestone. Organizations that start early by building visibility, assigning responsibility, and embedding governance into everyday workflows will be far better positioned to meet regulatory expectations without slowing down innovation.

    Final thoughts on the EU AI Act

    The EU AI Act can feel intimidating because it’s comprehensive, but it’s also very “programmable” from an operational standpoint. If you can inventory systems, classify risk, document decisions, and put clear governance around high-impact use cases, you’re already most of the way there.

    If you want to learn more about how to prepare for the EU AI Act, you can check out the Gartner® report, Getting Ready for the EU AI Act, Phase 1: Discover & Catalog.

    Getting Ready for the EU AI Act, Phase 1: Discover & Catalog

This Gartner® report outlines the foundational steps organizations must take to prepare for EU AI Act compliance.

    Casey Paxton, Content Marketing Manager

    Akeneo

    Gartner, Getting Ready for the EU AI Act, Phase 1: Discover & Catalog, Nader Henein, Gabriele Rigon, 28 October 2025. 

    Gartner is a trademark of Gartner, Inc. and/or its affiliates.
