EU AI Act — What Your Company Needs to Know
The EU AI Act is the world's first comprehensive AI regulation. If your company uses AI tools in Europe, these rules apply to you. Here's a practical overview of what deployers need to do.
Key Dates
2 February 2025 — Prohibited AI practices (bans on social scoring, manipulative AI, and similar uses) take effect. Already active.
2 August 2025 — Governance and enforcement rules apply; AI literacy obligations (Article 4) have applied since 2 February 2025. Companies must ensure staff understand the AI tools they use. Already active.
2 August 2026 — Full application for high-risk AI systems and all remaining deployer obligations. This is the main deadline.
Risk Categories
The EU AI Act classifies every AI system into one of four risk tiers. The tier determines what obligations apply.
Unacceptable Risk
These AI systems are banned outright in the EU; the only carve-outs are narrow, tightly conditioned law-enforcement exceptions for real-time biometric identification.
Social scoring, real-time remote biometric identification in publicly accessible spaces, manipulative AI that exploits vulnerable groups.
High Risk
Permitted but with significant obligations: conformity assessments, human oversight plans, technical documentation, and ongoing monitoring.
AI in HR/recruitment, credit scoring, education grading, law enforcement, critical infrastructure, immigration decisions.
Limited Risk
The main obligation is transparency: users must be clearly informed when they are interacting with AI, and AI-generated or manipulated content (such as deepfakes) must be labelled as such.
Chatbots, deepfake generators, emotion recognition systems, AI-generated content.
Minimal Risk
No specific regulatory requirements. Most business AI tools fall into this category.
Email drafting, document summarisation, code assistance, internal analytics, AI-powered search.
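The four tiers above lend themselves to a simple lookup for an internal AI inventory. The sketch below is illustrative only — the use-case names and tier assignments are assumptions drawn from the examples in this article, not a legal determination for any specific deployment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping based on the examples above; a real
# classification needs legal review of the concrete use case.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "email_drafting": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Default unknown use cases to HIGH so they get reviewed, not ignored."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("email_drafting").value)  # minimal
print(classify("new_unvetted_tool").value)  # high
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative choice: it forces a review rather than silently treating an unclassified tool as minimal risk.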
Deployer Obligations
If your company uses (deploys) AI systems — even if you didn't build them — these obligations apply to you.
Human Oversight
Ensure humans can monitor, interpret, and override AI decisions. Assign clear responsibility for oversight of each AI system.
AI Literacy
Train your staff to use AI systems competently. They must understand the capabilities, limitations, and risks of the AI tools they work with.
Operational Monitoring
Monitor AI systems during operation. Watch for anomalies, bias, performance degradation, or unexpected behaviour. Keep logs.
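A minimal sketch of the kind of decision log this implies. The Act does not mandate a specific schema, so every field name here is an assumption — chosen to support later anomaly and bias review.

```python
import json
import datetime

def log_ai_decision(system_id: str, decision: str, confidence: float,
                    human_reviewed: bool) -> str:
    """Build one JSON log line for an AI decision.

    Hypothetical schema: fields chosen so that bias, drift, and
    oversight gaps can be analysed from the log afterwards.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "decision": decision,
        "confidence": confidence,
        "human_reviewed": human_reviewed,
    }
    return json.dumps(entry)

# One log line per decision, appended to durable storage in practice.
line = log_ai_decision("cv-screener-v2", "shortlist", 0.91, human_reviewed=False)
```

Structured, timestamped entries like this make it possible to spot performance degradation or skewed outcomes across groups after the fact, which free-text logs rarely allow.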
Incident Reporting
Report serious incidents involving high-risk AI to authorities without undue delay. Maintain an incident log with severity, root cause, and resolution.
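The incident log described above (severity, root cause, resolution) could be kept as a structured record like the following sketch. The field names and severity labels are our assumptions, not terms prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    """One entry in an internal AI incident log (illustrative schema)."""
    system_id: str
    severity: str  # assumed scale, e.g. "minor" | "serious" | "critical"
    description: str
    root_cause: str = "under investigation"
    resolution: str = "open"
    reported_to_authority: bool = False
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

incident = AIIncident(
    system_id="credit-model-v3",
    severity="serious",
    description="Systematic score deflation for one applicant segment",
)
# Serious incidents involving high-risk AI: report without undue delay.
incident.reported_to_authority = True
```

Keeping root cause and resolution as mutable fields reflects how incidents actually evolve: the entry is opened immediately, then updated as the investigation progresses.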
Transparency
Inform individuals when they are interacting with AI or when AI is making decisions that affect them. Be clear about what the AI does and doesn't do.
Fundamental Rights Impact Assessment (FRIA)
Under Article 27, a FRIA must be completed before deploying high-risk AI in sensitive areas. The assessment documents the system's purpose, the groups affected, potential impacts on fundamental rights, mitigations, and human oversight measures.
Applies to: public bodies and private entities providing public services, plus deployers using high-risk AI for creditworthiness assessment or for risk assessment and pricing in life and health insurance.
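The documentation items Article 27 asks for can be tracked in a simple internal record. This is a record-keeping sketch, not an official template; the class and field names are assumptions mirroring the elements listed above.

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """Internal checklist mirroring the elements a FRIA must document."""
    system_purpose: str
    affected_groups: list[str] = field(default_factory=list)
    rights_impacts: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    oversight_measures: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """True only once every section has at least one entry."""
        return all([self.system_purpose, self.affected_groups,
                    self.rights_impacts, self.mitigations,
                    self.oversight_measures])

fria = FRIARecord(system_purpose="CV shortlisting for engineering roles")
# Incomplete until affected groups, impacts, mitigations and
# oversight measures are all filled in.
```

A completeness check like this is useful as a gate in an internal approval workflow: the high-risk system is not put into use until the record passes.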
Check your compliance posture
Fronterio automatically classifies your AI agents by risk tier and tracks your deployer obligations.