The Ledger · Saturday, 14 February 2026 · Issue № 32

AI Compliance Hub · newsroom

Regulation Analysis · 11 min read

The EU AI Act High-Risk AI System List, Annotated

Annex III of the EU AI Act lists the specific categories of high-risk AI systems. Here’s every category explained, with practical examples of what falls in and what doesn’t.

Illustration · AI Compliance Hub

The EU AI Act divides AI systems into four risk tiers: unacceptable risk (banned), high risk (heavily regulated), limited risk (transparency obligations), and minimal risk (no specific rules). The high-risk category is where most compliance work happens.

High-risk AI systems are listed in Annex III of the Act. These systems must meet requirements for risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity — and must undergo a conformity assessment before deployment.

Here’s every Annex III category, annotated.


Category 1: Biometric Identification and Categorisation

What’s listed: AI systems used for remote biometric identification of natural persons; biometric categorisation systems that infer sensitive attributes such as race, ethnicity, political opinions, or sexual orientation; and emotion recognition systems.

What’s banned (not just high-risk): Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions for searching for victims and missing persons, preventing specific imminent threats such as terrorist attacks, and locating suspects of certain serious crimes.

Examples in scope:

  • Facial recognition systems for identifying individuals in databases
  • Emotion recognition systems (banned outright in workplaces and educational institutions; high-risk elsewhere)
  • Gait analysis for identification

What’s out: Verification systems (confirming someone is who they claim to be, e.g., Face ID on a phone) are generally not caught by Annex III.


Category 2: Critical Infrastructure

What’s listed: AI used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.

Examples in scope:

  • AI systems managing power grid load balancing
  • Traffic management AI in smart city systems
  • Water treatment plant control systems with AI components

Practical note: The infrastructure itself isn’t regulated — the AI component used in its management is. This requires working through your infrastructure stack to identify where AI makes or influences decisions.


Category 3: Education and Vocational Training

What’s listed: AI systems used to determine access or admission to educational institutions, to evaluate learning outcomes, to assess the level of education a person should receive, and to monitor and detect prohibited behaviour during tests.

Examples in scope:

  • AI systems that score admissions applications
  • Automated proctoring systems that flag cheating during exams
  • AI that grades assignments or determines who advances

What’s out: Recommendation systems for educational content (e.g., “students who struggled with X should try Y”) are not high-risk.


Category 4: Employment, Workers’ Management, and Access to Self-Employment

What’s listed: AI used in recruitment (CV screening, interview evaluation), employment decisions (promotion, termination), task allocation, monitoring and evaluation of performance.

Examples in scope:

  • CV screening tools that rank or reject applicants
  • Video interview analysis tools that score candidates
  • Productivity monitoring systems that influence employment decisions
  • Performance management AI that recommends bonuses or terminations

This category overlaps with NYC Local Law 144: employers subject to NYC’s hiring law are already dealing with a US-equivalent regime for the employment-AI subset of this list.


Category 5: Access to Essential Private Services and Public Benefits

What’s listed: AI used to evaluate eligibility for public benefits or services; AI used in credit scoring; AI in life and health insurance risk assessment and pricing.

Examples in scope:

  • Credit scoring algorithms used in lending decisions
  • Systems that determine eligibility for government benefits
  • AI underwriting tools for life, health, or disability insurance
  • Rental application screening that relies on AI credit scoring

The breadth here: any AI that influences someone’s access to credit, insurance, or public benefits is potentially in this category. This covers a wide swath of fintech and insurtech applications, and arguably some proptech.


Category 6: Law Enforcement

What’s listed: AI used for risk assessments for individual criminal recidivism, polygraph testing, evaluation of evidence reliability, profiling of natural persons in criminal investigations.

Examples in scope:

  • Predictive policing tools (individual crime prediction based solely on profiling is banned outright, not merely high-risk)
  • Recidivism risk scoring tools used by courts or parole boards
  • AI analysis of CCTV footage to identify suspects

Heightened scrutiny: Law enforcement AI faces some of the strictest requirements under the Act, including mandatory human oversight and detailed logging.


Category 7: Migration, Asylum, and Border Control

What’s listed: AI systems for risk assessment of persons crossing borders, document authenticity verification, examination of asylum applications, predicting migration patterns.

Examples in scope:

  • Automated entry-denial systems
  • AI systems that score asylum claim credibility
  • Document fraud detection AI at borders

Category 8: Administration of Justice and Democratic Processes

What’s listed: AI systems intended to assist judicial authorities in researching and interpreting facts and law, and applying the law to concrete sets of facts; AI in electoral and voting systems.

Examples in scope:

  • AI legal research tools used in judicial proceedings
  • Case outcome prediction tools used by courts
  • AI systems used in vote counting or election management

Note: This category does NOT cover legal research tools used by lawyers in private practice — it covers tools deployed by judicial authorities themselves.


The Conformity Assessment Requirement

All high-risk AI systems in these categories must undergo a conformity assessment before being placed on the market. For most Annex III categories, providers can conduct this themselves (internal control). For remote biometric identification systems, assessment by a third-party notified body is required unless the provider fully applies the relevant harmonised standards.

The conformity assessment produces technical documentation and a declaration of conformity. Providers must register the system in the EU AI Act database before deployment.


What This Means for US Companies

If your AI product or service is used in any EU member state and falls into one of these categories, you are in scope — regardless of where your company is based.

Practical first step: Map your AI systems to these categories. Anything that matches is high-risk and needs a compliance program, not just a policy.
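The mapping exercise above can be started as a simple inventory pass before counsel gets involved. The sketch below is purely illustrative: the category keywords, system names, and the `map_to_annex_iii` helper are all hypothetical, and a keyword match is a prompt for legal review, not a scoping determination.

```python
# Hypothetical first-pass inventory mapper. The keyword sets loosely mirror
# the Annex III categories discussed in this article; they are NOT the
# statutory text and matching here is not a legal determination.

ANNEX_III_KEYWORDS = {
    "biometrics": {"remote identification", "biometric categorisation", "emotion recognition"},
    "critical_infrastructure": {"grid load balancing", "traffic management", "water treatment"},
    "education": {"admissions scoring", "exam proctoring", "automated grading"},
    "employment": {"cv screening", "interview scoring", "performance monitoring"},
    "essential_services": {"credit scoring", "benefits eligibility", "insurance underwriting"},
    "law_enforcement": {"recidivism scoring", "evidence evaluation", "suspect profiling"},
    "migration": {"border risk assessment", "asylum triage", "document fraud detection"},
    "justice": {"judicial research", "case outcome prediction", "vote counting"},
}

def map_to_annex_iii(inventory: dict) -> dict:
    """Map {system_name: use_case} to {system_name: [matched categories]}.

    An empty list means "no obvious match" -- review manually, since a
    keyword miss does not mean the system is out of scope.
    """
    result = {}
    for system, use_case in inventory.items():
        result[system] = [
            category
            for category, keywords in ANNEX_III_KEYWORDS.items()
            if use_case.lower() in keywords
        ]
    return result

# Example inventory (fictional product names)
inventory = {
    "ResumeRanker": "cv screening",
    "GridBalancer": "grid load balancing",
    "LessonSuggester": "lesson recommendation",  # out of scope per the article
}

for system, categories in map_to_annex_iii(inventory).items():
    status = "potentially HIGH-RISK: " + ", ".join(categories) if categories else "no match; review manually"
    print(f"{system}: {status}")
```

The design choice worth copying is the output shape, not the keywords: every system gets an explicit answer, so "no match" surfaces as an item needing human review rather than silently dropping off the list.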

Tagged regulations: EU AI Act · High-Risk AI · Compliance · Annex III
AI Compliance Hub editors
The editorial desk covers AI and cyber regulation across the US, EU, and UK. Tips? editors@aicompliancehub.com
Not legal advice

This article is for informational purposes only and does not constitute legal advice. Always consult qualified counsel before making compliance decisions.
