regulome.io — AI Compliance Intelligence · 5 Frameworks · Updated May 2026

The Complete AI Compliance Checklist for 2026

The definitive AI compliance checklist covering every major framework your organization may face — EU AI Act, ISO 42001, Colorado AI Act, NIST AI RMF, and NYC Local Law 144. Track your readiness across all five, download the PDF, and know exactly what steps remain before the next enforcement deadline.

Download the AI Compliance Checklist PDF — A4 & Letter · Printable · Free
5 frameworks · 46 checklist items · 47 days to the Colorado deadline · No signup required
Context

Why AI compliance is urgent in 2026

2026 is the year AI compliance shifts from aspiration to legal obligation. Multiple binding frameworks are now in force or entering enforcement — and organizations that delay risk penalties, procurement disqualification, and reputational damage.

Colorado AI Act deadline: June 30, 2026 — 47 days away

Colorado SB 24-205 requires deployers of high-risk AI systems that make consequential decisions in employment, education, financial services, healthcare, and housing to have impact assessments, consumer notice mechanisms, and grievance processes in place. The law applies to any company doing business in Colorado — not just Colorado-headquartered firms. See the Colorado checklist below.

The EU AI Act's high-risk AI provisions for Annex III systems apply from August 2, 2026 — less than three months after Colorado's deadline. Organizations deploying AI in hiring, credit, education, essential services, or law enforcement in the EU must have conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database completed by that date.

Globally, ISO/IEC 42001 has emerged as the de facto AI management system standard, with enterprise procurement increasingly requiring suppliers to demonstrate AIMS certification or equivalence. Organizations that treat ISO 42001 as a bolt-on will struggle — those that embed it into operations will find it simplifies EU AI Act and NIST AI RMF compliance simultaneously.

Overview

Five frameworks, one compliance picture

This AI compliance checklist covers every major framework a global organization is likely to face. Use the table to understand scope and binding status before working through the framework-specific checklists below.

Framework · Jurisdiction · Binding? · Who it covers · Key deadline
  • EU AI Act · European Union · Binding · Providers & deployers of AI systems placed on the EU market or used in the EU · Aug 2, 2026 (high-risk)
  • ISO/IEC 42001 · Global · Voluntary · Any organization developing, providing, or using AI systems · Ongoing (certifications)
  • Colorado AI Act · Colorado, USA · Binding · Deployers of high-risk AI affecting Colorado consumers · Jun 30, 2026
  • NIST AI RMF · United States · Voluntary · US federal agencies and government contractors; widely adopted by private sector · No hard deadline
  • NYC Local Law 144 · New York City, USA · Binding · Employers using automated employment decision tools for NYC-based roles · In force since Jul 2023
AI Compliance Checklist by Framework

Actionable checklist across all five frameworks

Work through each framework systematically. Click any checkbox to track your progress. The total across all frameworks is 46 items. Use the Download PDF button to export a printable version for your compliance team.

Framework 01 · International Standard
ISO/IEC 42001 Checklist
AI Management System · 10 key requirements

ISO 42001 establishes the requirements for an AI Management System (AIMS). Unlike a point-in-time audit, it demands a continual improvement cycle across governance, risk, operations, and performance evaluation. The 10 items below represent the highest-impact requirements for initial implementation. For the complete 93-item clause-by-clause checklist, see our dedicated ISO 42001 checklist page.

  • Define your AIMS scope and AI policy Clauses 4 & 5 — Document which AI systems are in scope, appoint top management accountability, and publish a signed AI policy committing to responsible AI development.
  • Complete an AI risk and impact assessment Clause 6 — Identify risks posed by each AI system in scope, assess probability and severity of harms, and document opportunities alongside risks.
  • Set measurable AI objectives aligned to policy Clause 6.2 — Define specific, time-bound AI objectives (e.g., bias audit completion rate, model card publication) with named owners and success metrics.
  • Allocate resources and establish AI competency Clause 7 — Ensure personnel working on AI systems have documented competence. Maintain training records. Communicate AI governance expectations across the organization.
  • Implement AI system lifecycle controls Clause 8 — Document operational planning for each AI system from development through decommission. Apply controls for data quality, model testing, and change management.
  • Manage third-party AI suppliers and partners Clause 8.4 — Assess AI-related risks in your supply chain. Ensure suppliers meet AIMS requirements through contracts, audits, or third-party attestations.
  • Monitor and measure AI system performance Clause 9 — Establish KPIs for AI system behavior (accuracy, fairness, drift). Conduct periodic evaluations and document results. Feed findings into management review.
  • Conduct an internal ISO 42001 audit Clause 9.2 — Schedule and complete an internal audit against all Annex A controls before any third-party certification assessment. Document findings and corrective actions.
  • Complete a management review of the AIMS Clause 9.3 — Hold a formal management review at planned intervals. Review audit results, objective progress, and resource adequacy. Document decisions and action items.
  • Address nonconformities and drive continual improvement Clause 10 — When a nonconformity is found, document root cause, implement corrective action, and verify effectiveness. Log improvements to the AIMS over time.
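Clause 6.2's "measurable objectives with named owners" reduces to a structured record that management review can query. A minimal sketch under assumed field names (nothing here is prescribed by the standard; a spreadsheet or GRC tool serves the same purpose):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIObjective:
    """One measurable AI objective per ISO 42001 Clause 6.2 (illustrative fields)."""
    name: str
    owner: str      # named accountable individual
    target: float   # success threshold (1.0 = 100% completion)
    current: float  # latest measured value
    due: date

    def on_track(self, as_of: date) -> bool:
        # An objective is fine if it has hit its target or is not yet due
        return self.current >= self.target or as_of <= self.due

objectives = [
    AIObjective("Bias audit completion rate", "Head of ML", 1.0, 0.8, date(2026, 6, 1)),
    AIObjective("Model cards published", "AI Governance Lead", 1.0, 1.0, date(2026, 3, 31)),
]

# Surface overdue, below-target objectives for the Clause 9.3 management review
overdue = [o.name for o in objectives if not o.on_track(as_of=date(2026, 6, 15))]
```

The point of the `on_track` check is that objectives feed directly into the management review (Clause 9.3) and nonconformity handling (Clause 10), so they need a pass/fail definition, not just a description.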
Framework 02 · Binding Regulation
EU AI Act Checklist
Regulation (EU) 2024/1689 · 10 key requirements by risk tier

The EU AI Act is the world's first comprehensive AI regulation. Obligations scale by risk tier. All EU AI Act-covered organizations must first classify their systems; from there, high-risk systems (Annex III) face the most extensive requirements. Prohibited practices have been banned since February 2, 2025. Full high-risk obligations apply from August 2, 2026.

Step 1: Classification & prohibited practices
  • Classify all AI systems by risk tier Determine whether each system is prohibited, high-risk (Annex III), limited-risk, or minimal-risk. Document the classification rationale. This step unlocks all subsequent obligations.
  • Cease any prohibited AI practices immediately Banned since Feb 2, 2025: real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), social scoring, emotion recognition in workplaces and schools, manipulative subliminal techniques, and AI that exploits vulnerabilities related to age, disability, or social or economic situation.
Step 2: High-risk AI systems (Annex III)
  • Implement a risk management system Article 9 — Establish a continuous risk management process throughout the AI system lifecycle. Identify, estimate, evaluate, and adopt risk mitigation measures.
  • Establish data governance practices Article 10 — Training, validation, and testing data must be subject to data governance including relevance, representativeness, and freedom from bias. Document data lineage.
  • Create and maintain technical documentation Article 11 & Annex IV — Prepare technical documentation before placing the system on the market. Includes system description, design logic, training data specs, and validation results.
  • Enable logging and record-keeping Article 12 — High-risk AI systems must automatically log events throughout operation sufficient to ensure traceability. Deployers must retain logs for at least 6 months.
  • Provide transparency and user instructions Article 13 — High-risk AI must be sufficiently transparent for deployers to interpret output and use it appropriately. Provide instructions for use, including limitations and foreseeable misuse.
  • Implement human oversight measures Article 14 — Design systems to enable effective human oversight. Humans overseeing the system must be able to understand its capabilities, detect failures, intervene, and override outputs.
  • Conduct a conformity assessment and affix CE marking Article 43 — Before market placement, complete a conformity assessment (self-assessment or third-party). Affix CE marking, draw up EU Declaration of Conformity, and register in the EU database.
  • Establish post-market monitoring and incident reporting Articles 72 & 73 — Providers must actively monitor deployed high-risk AI systems (Article 72) and report serious incidents to national authorities (Article 73) without undue delay and no later than 15 days after becoming aware, with shorter deadlines for deaths and widespread incidents.
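Article 12's automatic logging requirement comes down to emitting a traceable, append-only record for every operation of the system. A minimal sketch of one such record; the field set is an assumption for illustration, not a schema prescribed by the Act:

```python
import hashlib
import io
import json
from datetime import datetime, timezone

def log_decision(log_file, system_id, model_version, input_data, output, reviewer=None):
    """Append one traceability record per AI decision (EU AI Act Art. 12, illustrative)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        # Hash rather than store the raw input, keeping personal data out of the log
        "input_sha256": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # supports Art. 14 oversight evidence
    }
    log_file.write(json.dumps(record) + "\n")  # JSON Lines, one record per line
    return record

# Usage: io.StringIO stands in for a durable store with a retention policy
# covering the deployer's minimum six-month log-retention obligation
buf = io.StringIO()
rec = log_decision(buf, "credit-scoring-v2", "2026.05.1",
                   {"applicant_id": "A-17"}, "refer_to_human")
```

Hashing inputs instead of storing them raw is one design choice among several; what matters for traceability is that the record ties a specific model version to a specific decision at a specific time.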
Framework 03 · Binding State Law · ⚠ Deadline June 30, 2026
Colorado AI Act Checklist
SB 24-205 · 8 key requirements for deployers of high-risk AI

The Colorado AI Act applies to any business deploying a high-risk AI system that makes or substantially influences consequential decisions affecting Colorado residents — regardless of where the company is headquartered. Consequential decisions include employment actions, educational opportunities, financial products, healthcare, housing, insurance, and legal services.

A "high-risk AI system" under Colorado law is one that makes, or is a substantial factor in making, a consequential decision. Deployers — not just developers — bear primary obligations. If you use a third-party AI tool that influences employment, loan, or housing decisions for Colorado residents, you are a deployer.

  • Identify all high-risk AI systems in your stack Map every AI system that makes or substantially influences consequential decisions affecting Colorado residents. Include vendor-provided tools. Prioritize employment, credit, healthcare, and housing contexts.
  • Complete an AI impact assessment for each high-risk system Conduct an impact assessment covering the system's intended purpose, known limitations, potential for algorithmic discrimination, and the metrics used to evaluate performance. Document and retain.
  • Implement a risk management program Establish policies and procedures to manage algorithmic discrimination risks. This must be a documented program — not ad hoc reviews — with designated ownership and review cadence.
  • Provide consumer notice before consequential decisions Notify Colorado consumers that an AI system is being used to make or assist a consequential decision affecting them. Notice must be given prior to or contemporaneous with the decision.
  • Disclose the type of AI system and its role in the decision Consumers have the right to know the nature of the AI system used. Upon request, provide the principal reason(s) for the consequential decision and the data that contributed to it.
  • Establish a grievance and appeal process Consumers must have a meaningful opportunity to appeal consequential decisions made using high-risk AI. Create a formal process for receiving, reviewing, and responding to such appeals.
  • Conduct annual bias and discrimination reviews Perform an annual review of each high-risk AI system for algorithmic discrimination. Consider engaging third-party auditors for objectivity. This is closely related to AI bias audit requirements.
  • Publish a public statement on high-risk AI use Post a clear public statement summarizing the types of high-risk AI systems deployed and how you manage associated risks. Typically published on your website privacy or AI governance page.
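The impact-assessment items above lend themselves to a structured template with a completeness check, so gaps surface before the deadline rather than during an inquiry. A minimal sketch; the fields are drawn from the checklist above as an illustration, since SB 24-205 does not mandate an exact schema:

```python
from dataclasses import asdict, dataclass, field

@dataclass
class ImpactAssessment:
    """One impact assessment record per high-risk system (illustrative fields)."""
    system_name: str
    intended_purpose: str
    consequential_decision: str                 # e.g. "employment", "lending", "housing"
    known_limitations: list = field(default_factory=list)
    discrimination_risks: list = field(default_factory=list)
    performance_metrics: dict = field(default_factory=dict)
    last_bias_review: str = ""                  # must be refreshed at least annually

    def missing_fields(self) -> list:
        """List every empty field, for a pre-deadline completeness check."""
        return [name for name, value in asdict(self).items() if not value]

ia = ImpactAssessment(
    system_name="resume-screener",
    intended_purpose="Rank applicants for recruiter review",
    consequential_decision="employment",
    known_limitations=["lower accuracy on sparse resumes"],
    discrimination_risks=["proxy features correlated with age"],
    performance_metrics={"selection_rate_ratio": 0.85},
)
gaps = ia.missing_fields()  # → ["last_bias_review"]
```

One record per high-risk system, re-validated at the annual bias review, gives you both the retained documentation and the review cadence the checklist calls for.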
Framework 04 · US Federal Standard
NIST AI RMF Checklist
AI Risk Management Framework 1.0 · 8 core function requirements

The NIST AI Risk Management Framework (AI RMF 1.0) organizes AI risk management into four core functions: Govern, Map, Measure, and Manage. While voluntary, it is referenced in federal procurement, increasingly incorporated into sector regulations, and provides excellent scaffolding for organizations building toward EU AI Act or Colorado AI Act compliance. The 8 items below cover the highest-impact actions across all four functions.

  • GOVERN: Establish AI governance policies and accountability Define organizational policies for responsible AI development and deployment. Assign accountability at the executive level. Document roles and responsibilities across the AI lifecycle.
  • GOVERN: Create a culture of AI risk awareness Train all teams involved with AI systems on risk concepts, ethical considerations, and the organization's AI policies. Include technical, product, legal, and procurement staff.
  • MAP: Categorize and contextualize AI risks For each AI system, document its intended use, affected stakeholders, deployment context, and the potential harms. Use this context to prioritize risk management resources appropriately.
  • MAP: Identify and engage affected stakeholders Map internal and external parties affected by AI system decisions. Include end users, impacted communities, and third-party partners. Gather input on values and concerns before deployment.
  • MEASURE: Evaluate AI risks with quantitative and qualitative methods Use a combination of testing, red-teaming, benchmarking, and bias evaluation to measure AI risks. Document methodologies, results, and any residual risks accepted after mitigation.
  • MEASURE: Track trustworthiness characteristics over time Monitor accuracy, fairness, explainability, privacy, robustness, and security of AI systems post-deployment. Establish baselines and alert thresholds for drift or degradation.
  • MANAGE: Prioritize and treat identified AI risks Implement mitigation controls for risks prioritized in the Map and Measure stages. Document treatment decisions, including accepted residual risks with named risk owners.
  • MANAGE: Establish incident response for AI failures Create an AI incident response plan covering detection, triage, escalation, stakeholder communication, and post-incident review. Test the plan at least annually. Link to your broader risk management program.
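The Measure-function items above — baselines plus alert thresholds for drift — can be sketched as a simple comparison of current metrics against recorded baselines. Metric names and the tolerance value here are assumptions for illustration, not values from the RMF:

```python
def check_drift(baseline: dict, current: dict, tolerance: float = 0.05) -> list:
    """Return an alert for each trustworthiness metric that degraded beyond
    tolerance relative to its baseline (NIST AI RMF Measure function, illustrative)."""
    alerts = []
    for metric, base_value in baseline.items():
        delta = base_value - current.get(metric, 0.0)
        if delta > tolerance:  # metric dropped further than allowed
            alerts.append(f"{metric}: {base_value:.2f} -> {current[metric]:.2f}")
    return alerts

# Baselines captured at deployment; current values from the latest evaluation run
baseline = {"accuracy": 0.91, "demographic_parity": 0.88, "robustness_score": 0.75}
current  = {"accuracy": 0.90, "demographic_parity": 0.79, "robustness_score": 0.76}

alerts = check_drift(baseline, current)
```

Any alert produced here would feed the Manage function: triage via the incident response plan, and a documented treatment decision with a named risk owner.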
Framework 05 · Universal Baseline
General AI Governance Checklist
10 universal items every AI-using organization should complete

Regardless of which specific regulations apply to your organization, these 10 items represent baseline AI governance that every organization deploying AI systems should complete. They underpin compliance with every framework above and build the organizational maturity that makes regulatory compliance sustainable rather than a one-time scramble.

  • Maintain a complete AI system inventory Know every AI system in use across your organization — including embedded AI in SaaS tools, vendor-provided models, and internally built systems. Inventory is the foundation of all compliance work.
  • Assign an AI governance owner or committee Designate a named individual or cross-functional committee responsible for AI governance. Without clear ownership, compliance tasks fall through the cracks under deadline pressure.
  • Publish an AI use policy for employees Create and distribute an internal policy covering approved and prohibited uses of AI tools, data handling requirements, and escalation paths for novel AI use cases. Review annually.
  • Implement AI-specific data governance Extend your data governance framework to cover AI training data, inference data, and model outputs. Address data quality, consent, retention, and cross-border transfer requirements.
  • Conduct AI bias and fairness testing before deployment Test every AI system that affects individuals for demographic bias before go-live. Document the methodology, datasets used, results, and any remediation actions taken. See AI bias audit guidance.
  • Create model cards or system cards for significant AI systems Document each significant AI system's intended use, training approach, performance characteristics, limitations, and known biases. Make model cards available to downstream deployers and affected parties.
  • Ensure meaningful human oversight for consequential decisions For decisions that materially affect people (hiring, lending, healthcare triage), ensure a human can understand, challenge, and override AI outputs. Document the oversight mechanism and train the responsible staff.
  • Review AI vendor contracts for compliance obligations Ensure contracts with AI vendors address: data processing terms, bias testing obligations, audit rights, incident notification requirements, and liability allocation. Renegotiate gaps before deadline pressure forces concessions.
  • Establish an AI incident log and lessons-learned process Track all AI-related incidents, near-misses, and unexpected outputs. Review the log quarterly. Feed findings into your risk management process and model retraining decisions.
  • Schedule annual regulatory horizon scanning The AI regulatory landscape is evolving rapidly. Assign someone to monitor new legislation, guidance, and enforcement actions — in your jurisdictions and globally — at least annually. Use resources like Regulome Observatory to stay current.
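The inventory item above is the foundation for every framework's scoping step, and even a simple registry can flag which entries likely trigger binding obligations. A minimal sketch; the `frameworks_triggered` logic is a deliberately simplified assumption for a first pass, not legal analysis:

```python
from dataclasses import dataclass

# Decision domains treated as "consequential" for first-pass flagging (assumption)
CONSEQUENTIAL = {"employment", "lending", "housing", "insurance", "healthcare", "education"}

@dataclass
class AISystem:
    name: str
    vendor: str               # "internal" for in-house builds
    decision_domain: str      # what the system decides or influences
    used_in_eu: bool = False
    affects_colorado: bool = False

    def frameworks_triggered(self) -> list:
        """Crude first-pass scoping; real classification needs legal review."""
        hits = []
        if self.decision_domain in CONSEQUENTIAL:
            if self.used_in_eu:
                hits.append("EU AI Act (likely Annex III)")
            if self.affects_colorado:
                hits.append("Colorado AI Act")
        return hits

inventory = [
    AISystem("resume-screener", "VendorCo", "employment",
             used_in_eu=True, affects_colorado=True),
    AISystem("support-chatbot", "internal", "customer_support"),
]

flagged = {s.name: s.frameworks_triggered() for s in inventory}
```

Note that the vendor-provided screener is flagged while the internal chatbot is not: what triggers obligations is the decision domain and the affected population, not who built the system.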
Decision Guide

Which framework applies to you?

Use these four decision criteria to identify your primary compliance obligations. Most organizations will face multiple frameworks simultaneously — start with the binding obligations that have the nearest deadlines.

If you operate in the EU or sell AI to EU customers

Prioritize the EU AI Act

Any AI system placed on the EU market or used by an EU-based deployer falls under EU AI Act jurisdiction — regardless of where the provider is headquartered. Classify your systems by risk tier first.

If your AI affects Colorado residents' consequential decisions

Prioritize the Colorado AI Act — deadline June 30

Employment, lending, insurance, healthcare, or housing decisions involving Colorado residents trigger SB 24-205. You have 47 days. Impact assessments and grievance processes must be in place.

If you work with US federal agencies or want a universal governance framework

Adopt the NIST AI RMF

The NIST AI RMF is the dominant voluntary framework in the US. Federal contractors may face mandatory NIST AI RMF alignment. It also maps well to ISO 42001 and simplifies EU AI Act compliance.

If you want certification, supply chain credibility, or a global baseline

Pursue ISO 42001 certification

ISO 42001 is becoming a procurement requirement and provides documented, auditable evidence of AI governance maturity. Its management system structure maps directly to EU AI Act Annex IV documentation requirements.

Compliance Calendar

Upcoming AI compliance deadlines

These are the binding deadlines every AI-using organization must track in 2026. Bookmark this page — Regulome updates it as new regulatory guidance is issued.

FAQ

Frequently asked questions

Answers to the most common questions about AI regulatory compliance and how to use this checklist.

Ready to close your compliance gaps?

Download the complete AI compliance checklist as a PDF, or find certified compliance providers and auditors in the Regulome directory.