Compliance · 17 April 2026 · 13 min

EU AI Act Compliance: The Complete 2026 Guide for Deployers and Providers

Definitive guide to the EU AI Act: what it covers, who it applies to, the four risk tiers, deployer obligations under Article 26, FRIAs under Article 27, GPAI rules, penalties, and a practical 12-step roadmap. Built to pair with Fronterio's Design → Govern → Prove platform — where the compliance floor is free forever and nightly Autopilot auto-evidences 6 of your 8 deployer obligations.

What the EU AI Act is, in plain English

The EU AI Act — formally Regulation (EU) 2024/1689 — is the world's first horizontal law on artificial intelligence. It entered into force on August 1, 2024 and applies on a staggered schedule through 2027. It is not optional, it is not a framework, and it is not SOC 2 with an AI sticker. It is binding law with fines of up to €35 million or 7% of worldwide turnover.

The Act regulates AI by risk, not by technology. Four tiers sit on top of a single question: what is the system used for, and who does it affect? An AI model that plays chess is minimal risk. The same underlying model, deployed to screen CVs, is high-risk and pulls eight legal obligations onto the company using it. The technology does not change; the use case does. That is the core mental model every compliance conversation starts from.

Two audiences dominate this law. Providers are companies that build AI systems and place them on the EU market — OpenAI, Anthropic, Microsoft, and anyone training or fine-tuning a model for commercial release. Deployers are companies that use those AI systems under their own authority: a bank running a credit-scoring model, an HR team using an AI sourcing tool, a hospital piloting a triage assistant. The Act gives both groups real duties, but deployer obligations are the ones most commonly underestimated — and the ones most EU companies will be judged against.

Timeline: the dates that actually matter

The Act staggers enforcement across four milestones. Ignore the ones that do not apply to you and focus hard on the ones that do.

February 2, 2025 — prohibited practices and AI literacy. Chapter II (Article 5, banned uses) and Article 4 (AI literacy) are already in force. Social scoring, emotion recognition in the workplace, real-time biometric identification in public spaces, predictive policing based on profiling, and untargeted scraping of facial images are prohibited outright. Every organisation using AI must ensure staff involved in deployment and operation have sufficient AI literacy. This is already a legal requirement, not a future one.

August 2, 2025 — general-purpose AI rules and governance. GPAI providers (model makers releasing foundation models into the EU market) must publish technical documentation, copyright compliance statements, and training data summaries. National competent authorities must be designated. The EU AI Office supervises GPAI with systemic risk directly. If your vendor is OpenAI, Anthropic, or Google, they have already completed most of this and should be providing you with the downstream documentation you need under Article 53.

August 2, 2026 — general applicability of the high-risk regime. This is the big one. Most provisions of the Act — high-risk classification, the requirements for high-risk systems (Articles 9–15), provider obligations (Article 16), deployer obligations (Article 26), FRIAs (Article 27), transparency duties (Article 50), post-market monitoring (Article 72), serious-incident reporting (Article 73) — become enforceable. If your company is running or plans to run any Annex III use case (HR, credit, insurance, education, law enforcement, essential services, critical infrastructure), this is the deadline that matters.

August 2, 2027 — embedded high-risk systems. Article 6(1) high-risk AI that is part of products already covered by Annex I sectoral legislation (medical devices, machinery, toys, lifts) gets an extra year. If you are not in one of those sectors, this date is not yours.

Who the Act applies to: providers, deployers, importers, distributors

The Act is explicit about four operator roles. Most of the confusion in early compliance conversations comes from companies guessing which role they are in. Guess wrong and you prepare the wrong documentation for the wrong deadline.

A provider is a natural or legal person that develops an AI system — or has one developed — and places it on the EU market or puts it into service under its own name or trademark. Anthropic is a provider of Claude. OpenAI is a provider of GPT. If you fine-tune a foundation model and resell the result as your own product, you become a provider of that system under Article 25.

A deployer is a natural or legal person using an AI system under its authority. The overwhelming majority of companies in the EU are deployers, not providers. You use Copilot for coding assistance, Gemini for email drafts, a specialist HR tool for CV screening. You did not build the model. But you chose to use it on your staff, your customers, or your data. The deployer obligations under Article 26 apply to you.

An importer is established in the EU and places an AI system from a non-EU provider on the EU market. A distributor is any other operator in the supply chain making an AI system available. Both have lighter duties than providers but carry liability if they knowingly distribute non-conforming systems.

If you are a deployer that modifies a high-risk AI system substantially — changing its intended purpose or altering how it makes decisions — Article 25 treats you as a provider of the modified system and all provider obligations attach. Fine-tuning a high-risk model on your own data can count. Swapping out a scoring threshold based on a new internal policy can count. This is the gotcha that catches the most companies off-guard.

The four risk tiers explained

The Act sorts every AI system into one of four tiers. Each tier comes with a different regulatory load. Getting your classification right is the single most important thing you will do in year one.

Unacceptable risk (Article 5). Banned outright. Social scoring by public authorities, subliminal manipulation that causes harm, exploitation of vulnerabilities (age, disability, socio-economic), real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions), predictive policing based solely on profiling, emotion recognition in the workplace or education (with narrow medical and safety exceptions), biometric categorisation based on sensitive attributes, and untargeted scraping of facial images from the internet or CCTV. Fines for prohibited practices are the highest bracket: €35 million or 7% of worldwide annual turnover.

High-risk (Article 6 and Annex III). Allowed, but with the full obligation stack. Two routes in: (1) Article 6(1) — AI is a safety component of a product already covered by Annex I legislation (medical devices, machinery, toys, lifts, vehicles); (2) Annex III — AI is used in specified sensitive areas: biometrics, critical infrastructure, education, employment and HR, essential private and public services (including credit scoring and health insurance risk assessment), law enforcement, migration and border control, administration of justice and democratic processes. If you are a bank running automated credit decisions, an HR team using AI sourcing, or an insurer pricing risk algorithmically — you are in this tier.

Limited risk (Article 50). Transparency-only obligations. Chatbots must disclose they are AI to the person they interact with. Deepfakes must be labelled. AI-generated content must be marked in a machine-readable way. Emotion recognition and biometric categorisation (where permitted) must be disclosed to the people affected. No FRIA, no human-oversight plan, no technical documentation — just honest disclosure.

Minimal risk. Everything else. Spam filters, video game NPC AI, inventory-forecasting models. No obligations under the Act, though voluntary codes of conduct are encouraged. The overwhelming majority of AI systems in commercial use sit here.
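Because the tier turns entirely on the use case, the classification step can be made mechanical. Below is a minimal sketch in Python, assuming a hypothetical internal inventory where each system carries a use-case label; the label sets are illustrative shorthand, not the Act's wording, and a real classifier would work from the full Article 5 and Annex III text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # Article 5: banned outright
    HIGH = "high"                   # Article 6 + Annex III: full obligation stack
    LIMITED = "limited"             # Article 50: transparency only
    MINIMAL = "minimal"             # no obligations under the Act

# Illustrative use-case labels; a real inventory would carry the full Annex III reasoning.
PROHIBITED = {"social_scoring", "workplace_emotion_recognition", "untargeted_face_scraping"}
ANNEX_III = {"cv_screening", "credit_scoring", "insurance_risk_pricing", "exam_proctoring"}
TRANSPARENCY_ONLY = {"customer_chatbot", "synthetic_media_generation"}

def classify(use_case: str) -> RiskTier:
    """Classify a deployment by its use case, not by the underlying technology."""
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_ONLY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The same model lands in different tiers depending on what it is used for:
print(classify("synthetic_media_generation"))  # RiskTier.LIMITED
print(classify("cv_screening"))                # RiskTier.HIGH
```

Encoding the decision this way is less about automation than about forcing the reasoning behind each tier to be written down, which is exactly what an authority will ask to see.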

Article 26: the eight deployer obligations you must meet

This is the heart of the Act for most EU companies. If you run any high-risk AI system under your own authority, these are the eight duties you carry.

1. Use the system in line with the provider's instructions (Art 26(1)). Read the instructions for use. Do not deploy the system for a purpose the provider did not validate. Document your intended use.

2. Assign trained human oversight (Art 26(2)). Named individuals with the authority, competence, and time to actually supervise the system. Not a token role — they must understand the system's capabilities and limitations and intervene when it misbehaves.

3. Ensure input data relevance (Art 26(4)). Where you control input data, you must take care that it is relevant and sufficiently representative of the intended purpose. Garbage in, garbage out — and the liability is yours.

4. Monitor operation and report serious incidents (Art 26(5) + Art 73). Log outputs, watch for drift, malfunctions, and incidents that affect fundamental rights. Serious incidents must be reported to the competent authority no later than 15 days after you become aware of them; a widespread infringement or an incident affecting critical infrastructure within 2 days, and a death within 10 days. This is the 48h+15d deadline clock that bites hard; a minimal sketch of the clock follows at the end of this section.

5. Keep logs (Art 26(6)). System-generated logs must be retained as long as appropriate to the intended purpose — in practice, at least six months, and longer for traceability. The log must allow post-hoc investigation of how a specific decision was made.

6. Inform workers (Art 26(7)). Before deploying a high-risk AI system in the workplace, inform workers' representatives and affected workers. Similar duties already exist under labour law in most EU member states; the Act aligns with and amplifies them.

7. Register in the EU database (Art 26(8)). Public authorities and EU bodies deploying Annex III systems must register each deployment in the EU-wide public database.

8. Cooperate with national competent authorities (Art 26(12)). When the authority asks for documentation, logs, or explanation, you provide it. No privilege. No delay.

On top of these eight, Article 4 (AI literacy) and Article 50 (transparency to affected persons) apply in parallel.
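The deadline clock referenced in obligation 4 is simple enough to sketch. A minimal example in Python, using the Article 73 windows described above (15 days in the general case, 2 days for widespread infringements or incidents affecting critical infrastructure, 10 days for a death); the category labels and function name are illustrative assumptions, not terms from the Act.

```python
from datetime import datetime, timedelta

# Reporting windows counted from the moment you become aware of the incident.
# Category labels are simplified shorthand, not the Act's exact wording.
REPORTING_WINDOWS = {
    "serious_incident": timedelta(days=15),
    "widespread_or_critical_infrastructure": timedelta(days=2),
    "death": timedelta(days=10),
}

def reporting_deadline(aware_at: datetime, category: str) -> datetime:
    """Latest point at which the report must reach the relevant authority."""
    return aware_at + REPORTING_WINDOWS[category]

aware = datetime(2026, 9, 1, 14, 30)
print(reporting_deadline(aware, "widespread_or_critical_infrastructure"))  # 2026-09-03 14:30:00
print(reporting_deadline(aware, "serious_incident"))                       # 2026-09-16 14:30:00
```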

Article 27: Fundamental Rights Impact Assessments (FRIAs)

FRIAs are the piece of the Act that trips up companies who assumed their existing DPIA templates would carry them through. They will not. A FRIA is its own exercise, with its own scope, output, and legal standing.

Who needs one: deployers that are (a) bodies governed by public law, (b) private entities providing public services, or (c) deployers using Annex III high-risk systems for credit scoring or for life and health insurance risk assessment and pricing — in practice, anyone assessing natural persons' creditworthiness or insurability. If you are a bank, an insurer, a government agency, or a team inside a public body using high-risk AI on people, you probably owe a FRIA.

What a FRIA must document (Art 27(1)): the intended use and context of deployment; the categories of natural persons likely to be affected; specific risks of harm to fundamental rights; a description of human oversight measures; the measures to mitigate those risks if they materialise; and the governance arrangement, including complaint and redress mechanisms.

When it happens: before first putting the high-risk system into use. And again if any of the elements above change materially. The FRIA is not a one-off artefact. It is a living document that tracks the deployment throughout its life.

Who sees it: the deployer notifies the national market surveillance authority of the results. Some elements may end up in the public EU database. Your FRIA is not a private internal memo.

The most common mistake: treating the FRIA as a compliance narrative instead of an actual risk assessment. The authorities will read these. They will compare them against incident reports, complaints, and your deployed behaviour. A FRIA that claims minimal risk for a system that later generates discrimination complaints is a liability, not a shield.
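One way to avoid that trap is to keep the FRIA structured rather than narrative: store it as a record whose fields mirror the Article 27(1) elements. A minimal sketch in Python; the field names are assumptions chosen for readability, not the Act's terminology.

```python
from dataclasses import dataclass, field

@dataclass
class FRIA:
    """One Fundamental Rights Impact Assessment, with fields mirroring Article 27(1)."""
    system_name: str
    intended_use_and_context: str
    affected_person_categories: list[str]   # categories of natural persons likely to be affected
    fundamental_rights_risks: list[str]     # specific risks of harm
    human_oversight_measures: list[str]
    mitigation_measures: list[str]          # what happens if a risk materialises
    governance_and_redress: str             # including complaint mechanisms
    assessed_on: str                        # ISO date of this assessment
    material_changes: list[str] = field(default_factory=list)  # each entry triggers a reassessment
```

Treated as a record rather than a memo, the FRIA stays current: any material change appends an entry and forces a reassessment before the deployment carries on.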

GPAI: what it means when your vendor is OpenAI, Anthropic, or Google

General-purpose AI (GPAI) covers foundation models trained on broad data for a wide range of downstream tasks. Articles 51 through 55 handle them separately from Annex III high-risk systems. Most EU companies are not GPAI providers — but every company using Copilot, Claude, Gemini, or GPT is a downstream deployer of a GPAI system, and that pulls specific obligations.

What GPAI providers must do (Art 53): publish and maintain technical documentation of the model; publish a copyright policy that complies with EU copyright law, including respect for opt-outs under Directive (EU) 2019/790; publish a sufficiently detailed summary of training data. If the model has systemic risk (a FLOP-threshold definition currently set at 10^25), they must also perform model evaluation, adversarial testing, incident reporting, and cybersecurity-hardening (Art 55).

What downstream deployers get from this: Article 53(1)(b) requires providers to give you, the downstream user, the information and technical documentation you need to integrate the GPAI model into your own AI system and to comply with your obligations under the Act. This is legally binding. If your vendor will not tell you what the model was trained on, how to evaluate it, or what its known limitations are — that is a compliance problem for them, and you should escalate it in writing.

The practical implication: keep copies of every piece of model documentation your vendor sends. Track the model version you are using. When the vendor updates the model, your FRIA and technical documentation may need to update too. Model drift from version changes is a recognised risk surface under Article 72 post-market monitoring.
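A minimal sketch of that version-and-documentation tracking, in Python. The record fields and the review rule are assumed internal conventions, not something the Act or any vendor API prescribes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorModelRecord:
    """Which GPAI model version you run, and which vendor documents cover it."""
    vendor: str
    model_name: str
    model_version: str
    documentation_refs: list[str]   # paths or URLs to the Article 53 documentation received
    received_on: date
    fria_reviewed: bool             # has the FRIA been re-checked against this version?

def needs_review(record: VendorModelRecord, deployed_version: str) -> bool:
    """Flag a version change so the FRIA and technical documentation get re-checked."""
    return record.model_version != deployed_version or not record.fria_reviewed
```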

One distinction that matters: a GPAI model is not automatically a high-risk AI system. Running Claude for marketing copy is minimal risk. Running Claude for credit decisions is high-risk, and all Article 26 obligations kick in on your side — regardless of what the vendor documentation says.

Penalties, enforcement, and a practical compliance roadmap

Article 99 sets three fine brackets. Prohibited practices under Article 5: up to €35 million or 7% of worldwide annual turnover, whichever is higher. Non-compliance with other obligations (provider duties, Article 26 deployer duties, transparency, FRIA, GPAI obligations): up to €15 million or 3%. Supplying incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1%. SMEs and start-ups get proportionally scaled fines. These caps are real, and the top bracket sits well above GDPR's ceiling of €20 million or 4%.
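The bracket arithmetic is worth working through once with numbers. A minimal sketch in Python against a hypothetical EUR 800 million worldwide turnover; the turnover figure is illustrative, and the whichever-is-higher rule comes from Article 99.

```python
def fine_cap(fixed_eur: float, pct: float, worldwide_turnover_eur: float) -> float:
    """Article 99 cap: the higher of the fixed amount and the turnover percentage."""
    return max(fixed_eur, pct * worldwide_turnover_eur)

turnover = 800_000_000  # hypothetical worldwide annual turnover of EUR 800m
print(fine_cap(35_000_000, 0.07, turnover))  # prohibited practices:            56,000,000.0
print(fine_cap(15_000_000, 0.03, turnover))  # other obligations:               24,000,000.0
print(fine_cap(7_500_000, 0.01, turnover))   # misleading info to authorities:   8,000,000.0
```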

Enforcement runs through national market surveillance authorities — at least one per member state, designated by August 2, 2025. The EU AI Office oversees GPAI with systemic risk. Coordination happens through the European AI Board. Complaints from affected individuals, whistleblowing from inside the company, and routine inspections are all live channels for triggering a review.

Where to actually start — a 12-step practical roadmap:

1. Inventory every AI system in use. Shadow AI counts. Copilot licences count. An internal ChatGPT Enterprise contract counts. You cannot classify what you cannot see.

2. Classify each system against Article 5, Annex III, Article 50, and minimal risk. Document the reasoning, not just the answer; a minimal inventory sketch follows this list.

3. For each high-risk system, map your operator role. Are you deployer only? Or have you modified the system enough that Article 25 makes you a provider?

4. Assign a named owner per high-risk system. This person holds Article 26(2) oversight duty.

5. Stand up AI literacy training under Article 4. Cover all staff involved in deployment and operation, not just data scientists.

6. Begin a FRIA for every Annex III system where your deployer role requires one (credit, insurance, public services, law enforcement, HR for public bodies).

7. Write an AI Usage Policy covering acceptable use, approval workflows, incident reporting, and deployer-side guardrails.

8. Build the Article 26(5) monitoring loop. Log retention, drift detection, incident escalation path.

9. Wire your Article 73 serious-incident process. Who files, within what time, to which authority. Test it with a tabletop exercise.

10. Update vendor contracts. You need Article 53(1)(b) documentation rights baked into your GPAI supplier agreements.

11. Register with the EU database where required (public bodies, Annex III deployers with public-service character).

12. Brief the board. This is a governance-level compliance programme, not an IT project. Budget for it accordingly.
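The inventory sketch referenced in step 2, in Python: one record per AI system, combining the outputs of steps 1 through 4. The field names are an assumed internal convention, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of the AI inventory built in steps 1 to 4."""
    name: str                       # e.g. "Copilot", "CV screening tool"
    vendor: str
    use_case: str
    risk_tier: str                  # unacceptable / high / limited / minimal
    classification_reasoning: str   # the reasoning, not just the answer
    operator_role: str              # "deployer", or "provider" after an Article 25 modification
    oversight_owner: str            # named person holding the Article 26(2) duty
    fria_required: bool
```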

On platforms: a handful of companies sell software to operationalise this work. Vanta covers EU AI Act alongside its main SOC 2 and ISO 27001 product — a trust-centre add-on rather than a purpose-built AI governance tool; we break down where it helps and where it stops short in our Fronterio vs Vanta for EU AI Act compliance comparison. Credo AI, Fairly AI, and Holistic AI each target specific slices (policy register, LLM red-teaming, statistical audit respectively); see our platform comparison for Credo, Fairly, and Holistic. Fronterio is built EU AI Act-first: Article 26 obligations, FRIA wizard, Article 73 incident workflow with 48h+15d deadline clock, post-market monitoring reports, and an auto-evidence ladder that advances six obligations deterministically from platform state.

Frequently asked questions

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first horizontal law regulating artificial intelligence. It entered into force on August 1, 2024 and classifies AI systems into four risk tiers — unacceptable, high-risk, limited risk, and minimal risk — with corresponding obligations for providers (who build AI systems) and deployers (who use them). Most provisions become fully applicable on August 2, 2026.

When does the EU AI Act take effect?

The Act applies in phases. February 2, 2025: prohibited practices (Article 5) and AI literacy (Article 4). August 2, 2025: general-purpose AI rules and national competent authorities. August 2, 2026: general applicability of the high-risk regime (Article 26 deployer obligations, FRIAs, post-market monitoring, serious-incident reporting). August 2, 2027: embedded high-risk systems in products covered by Annex I sectoral legislation.

Who does the EU AI Act apply to?

It applies to providers, deployers, importers, and distributors of AI systems placed on the EU market or affecting people in the EU — regardless of where the company is headquartered. Most EU companies are deployers (they use AI systems under their own authority). Providers are the model makers like OpenAI and Anthropic. Importers and distributors carry lighter obligations but real liability for non-conforming systems.

What is a high-risk AI system under the EU AI Act?

High-risk AI systems are defined under Article 6 plus Annex III. Annex III lists eight sensitive domains: biometrics, critical infrastructure, education, employment and HR, essential private and public services (including credit scoring and health insurance), law enforcement, migration and border control, and administration of justice. AI used as a safety component of products under Annex I sectoral legislation (medical devices, machinery, toys, lifts, vehicles) is also high-risk.

What are the deployer obligations under Article 26?

Article 26 defines eight core duties for deployers of high-risk AI systems: (1) use the system per provider instructions, (2) assign trained human oversight, (3) ensure input data relevance, (4) monitor operation and report serious incidents within the Article 73 deadlines, (5) retain logs, (6) inform workers before workplace deployment, (7) register Annex III deployments in the EU database where required, and (8) cooperate with national authorities. Articles 4 (AI literacy) and 50 (transparency) apply in parallel.

What is a Fundamental Rights Impact Assessment (FRIA)?

A FRIA is a formal assessment required under Article 27 before a deployer puts certain high-risk AI systems into use. It is mandatory for public bodies, private entities providing public services, and deployers using Annex III systems for credit scoring or life/health insurance risk assessment. A FRIA must document the intended use, the affected persons, specific fundamental-rights risks, human oversight measures, risk-mitigation measures, and governance including complaint mechanisms. It is not a one-off artefact — it must be updated when deployment conditions change materially.

What are the penalties for EU AI Act violations?

Article 99 sets three fine brackets. Prohibited practices under Article 5: up to €35 million or 7% of worldwide annual turnover, whichever is higher. Non-compliance with other obligations (provider duties, deployer duties, FRIA, transparency, GPAI): up to €15 million or 3%. Supplying incorrect or misleading information to authorities: up to €7.5 million or 1%. SMEs and start-ups get proportionally scaled fines.

Is Vanta good for EU AI Act compliance?

Vanta is excellent for SOC 2, ISO 27001, HIPAA, and GDPR — compliance frameworks built around control inventories and evidence uploads with auditor networks attached. Its EU AI Act content sits alongside those frameworks in a trust-centre bundle. For companies where the EU AI Act is one of ten frameworks and the security-audit relationship matters most, Vanta can work. For deployers where the AI Act is the primary regulatory concern — needing auto-evidence on Article 4 literacy, Article 26 oversight, Article 27 FRIA, Article 72 post-market monitoring, and Article 73 incident workflows — a purpose-built platform like Fronterio covers the AI-specific obligations more deeply. See our Fronterio vs Vanta for EU AI Act comparison for the full breakdown.

Ready to get started?

Fronterio helps you implement everything discussed in this article — with built-in tools, automation, and guidance.