When the EU AI Act passed in 2024, it became the world’s first comprehensive AI regulation. The UK, fresh from Brexit, took a different path. Understanding both is essential for companies operating in Europe.
The Core Difference: One Law vs. Many Principles
The EU chose a hard-law approach: a single regulation that applies across all sectors, with specific obligations, timelines, penalties, and an enforcement body (the EU AI Office).
The UK chose a principles-based approach: the government published five cross-sector principles for AI governance (safety, security and robustness; transparency and explainability; fairness; accountability and governance; contestability and redress), then directed existing sector regulators — the FCA for financial services, the ICO for data protection, the CMA for competition — to apply those principles in their own domains.
The UK AI Safety Institute (since renamed the AI Security Institute, still AISI) focuses on frontier model evaluation and research, not market regulation.
Comparison Table
| Dimension | EU AI Act | UK Approach |
|---|---|---|
| Legal form | Binding regulation | Non-binding principles + sectoral guidance |
| Scope | All AI across all sectors | Principles apply sector-by-sector |
| High-risk rules | Mandatory conformity assessment, registration, oversight | Guidance varies by sector regulator |
| Foundation models | GPAI obligations in force Aug 2025 | Voluntary safety commitments for frontier models |
| Enforcement | EU AI Office + national market surveillance | Existing sector regulators (FCA, ICO, CMA, etc.) |
| Penalties | Up to €35M or 7% of global turnover, whichever is higher | Existing regulatory penalties per sector |
| Certification | Conformity assessment required for high-risk | No mandatory certification regime |
| Timeline | Phased from 2024–2027 | No fixed implementation timeline |
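The headline EU penalty cap works as a "whichever is higher" rule: €35M or 7% of worldwide annual turnover, for the most serious (prohibited-practice) violations. A minimal sketch of that arithmetic, assuming the top penalty band only — lower tiers with smaller caps exist for other breaches:

```python
def eu_ai_act_max_fine(global_turnover_eur: float) -> float:
    """Headline EU AI Act penalty cap for the most serious violations:
    the higher of EUR 35M or 7% of worldwide annual turnover.
    (Top band only; other breaches carry lower caps.)"""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A firm with EUR 1B turnover: 7% = EUR 70M, above the EUR 35M floor.
print(eu_ai_act_max_fine(1_000_000_000))  # 70000000.0

# A firm with EUR 100M turnover: 7% = EUR 7M, so the EUR 35M floor applies.
print(eu_ai_act_max_fine(100_000_000))    # 35000000.0
```

The practical point: for large companies the 7% branch dominates, so exposure scales with revenue rather than being capped at a fixed figure.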
Where the Obligations Overlap
Despite the structural differences, several requirements appear in both regimes:
Transparency: Both require disclosure when AI is making or influencing significant decisions. The EU AI Act requires this for high-risk systems. UK regulators (especially the ICO under UK GDPR) require explanations for automated decisions.
High-risk use cases: The EU Annex III categories — hiring, credit, benefits, law enforcement — are also priority areas for UK sector regulators. If you’re compliant with EU AI Act high-risk requirements, you’re likely meeting the spirit of UK expectations.
Data governance: Both regimes layer on top of GDPR (EU) / UK GDPR. Data quality, bias management, and lawful basis requirements apply in both jurisdictions.
Human oversight: The EU AI Act mandates human oversight for high-risk systems. UK guidance consistently emphasizes meaningful human control in regulated sectors.
Where They Diverge
Foundation models: The EU has binding GPAI obligations. The UK relies on voluntary frontier safety commitments from major developers. Companies releasing foundation models face real legal obligations in the EU but softer expectations in the UK.
Conformity assessment: The EU requires documented self-assessment (or third-party audit) before deploying high-risk AI. The UK has no equivalent mandatory step.
Banned AI: The EU outright bans certain practices (social scoring, real-time remote biometric identification in public spaces with narrow exceptions, manipulative techniques). The UK has no equivalent banned categories — these would be handled under existing law (consumer protection, privacy, etc.).
Enforcement body: The EU AI Office is a dedicated regulator with cross-border enforcement powers. The UK has no single AI regulator; enforcement depends on which sector regulator is in play.
Practical Implications for Companies
Operating in both markets? Build to EU AI Act standards. EU requirements are more prescriptive and will establish your documentation, governance, and oversight baseline. UK expectations are generally satisfied if you meet EU Act requirements.
EU only? Full compliance program required. Follow the phased timeline, complete conformity assessments for high-risk systems, and register those systems in the EU database.
UK only? Understand which sector regulators apply to your AI use case and what guidance they’ve issued. ICO guidance on AI and automated decision-making is the most developed.
The Convergence Risk
The UK government has signaled it may introduce more formal AI regulation as the sector matures. The EU AI Act creates a gravitational center — companies compliant with it are well-positioned wherever AI regulation evolves. Building to EU standards now is a defensible strategy in both markets.
This article is for informational purposes only and does not constitute legal advice. Always consult qualified counsel before making compliance decisions.
