Overview
ISO/IEC 42001:2023 is the world's first international standard for Artificial Intelligence Management Systems (AIMS). Published on December 18, 2023 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), it defines requirements for establishing, implementing, maintaining, and continually improving an AI management system within an organization.
ISO 42001 follows the Annex SL high-level structure — the same management system architecture used by ISO 27001 (information security), ISO 9001 (quality), and ISO 14001 (environmental management). This means organizations already certified to other ISO management system standards can integrate ISO 42001 into their existing governance infrastructure with significantly less effort.
Unlike voluntary frameworks such as the NIST AI RMF, ISO 42001 is a certifiable standard: organizations can undergo independent third-party audits and receive formal certification that demonstrates their AI governance meets internationally recognized requirements. This certification is increasingly valuable for enterprise sales, regulatory alignment, and stakeholder trust.
The standard applies to any organization that provides or uses AI-based products or services, regardless of size, type, or sector. It is technology-agnostic and designed to accommodate the full range of AI systems — from traditional machine learning models to generative AI and autonomous systems.
Who It Applies To
ISO 42001 is a voluntary standard — no regulation mandates adoption. However, several categories of organizations have compelling reasons to pursue certification:
Organizations That Should Consider ISO 42001
- AI product and service providers — SaaS companies, AI platform vendors, and technology providers whose customers require demonstrable AI governance as part of procurement due diligence
- Organizations subject to the EU AI Act — ISO 42001 is a candidate harmonized standard; certification may provide a presumption of conformity with certain EU AI Act requirements
- Enterprises with existing ISO certifications — organizations already certified to ISO 27001 or ISO 9001 can extend their management systems to cover AI governance with reduced incremental effort
- Organizations selling into regulated industries — financial services, healthcare, and defense procurement increasingly require vendors to demonstrate structured AI governance
- Companies preparing for regulatory convergence — as AI regulations mature globally, ISO 42001 certification provides a framework-agnostic governance credential that travels across jurisdictions
Roles Defined by the Standard
ISO 42001 defines responsibilities across the organization:
- Top management — accountable for the AIMS, including AI policy, objectives, resource allocation, and management review
- AI system owners — responsible for individual AI system lifecycle management, risk assessment, and performance monitoring
- Internal auditors — conduct periodic audits of the AIMS to verify ongoing conformance
- Risk managers — assess and treat AI-specific risks using the organization's risk assessment methodology
- Data governance leads — ensure data quality, provenance, and privacy for AI system training and operation
Management System Structure
ISO 42001 is organized around the Plan-Do-Check-Act (PDCA) cycle, following the Annex SL high-level structure shared by all modern ISO management system standards.
Clause 4 — Context of the Organization
Establishes the foundation for the AIMS:
- External and internal issues — identifying factors that affect the organization's AI objectives, including regulatory requirements, market expectations, and technological context
- Interested parties — determining stakeholders (regulators, customers, employees, affected communities) and their requirements
- Scope — defining the boundaries of the AIMS, including which AI systems, processes, and organizational units are covered
- Management system — establishing the AIMS and its processes
Clause 5 — Leadership
Requires top management commitment:
- Leadership and commitment — top management must demonstrate active involvement in the AIMS, not merely delegate it
- AI policy — establishing an AI policy appropriate to the organization's purpose, including commitments to compliance, responsible AI, and continual improvement
- Roles and responsibilities — assigning and communicating AIMS roles, responsibilities, and authorities
Clause 6 — Planning
Addresses risk and opportunity management:
- AI risk assessment — establishing a systematic methodology for identifying and assessing AI-specific risks across the AI system lifecycle
- AI risk treatment — selecting controls (including from Annex A) to address identified risks and producing a Statement of Applicability
- AI objectives — setting measurable AI governance objectives at relevant functions and levels
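The risk assessment and treatment steps above can be sketched as a simple scoring routine. The 1–5 scales, score thresholds, and treatment labels below are illustrative assumptions; ISO 42001 requires a defined methodology and organizational risk criteria but does not prescribe any particular scoring scheme.

```python
from dataclasses import dataclass

# Illustrative thresholds -- each organization defines its own risk
# criteria under Clause 6.1; these numbers are not from the standard.
TREATMENT_THRESHOLDS = [(20, "avoid"), (12, "mitigate"), (6, "transfer"), (0, "accept")]

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def treatment(self) -> str:
        # First threshold the score meets wins; the 0 entry always matches.
        for threshold, action in TREATMENT_THRESHOLDS:
            if self.score >= threshold:
                return action
        return "accept"  # defensive fallback, unreachable with a 0 threshold

risks = [
    AIRisk("training-data bias", likelihood=4, impact=4),
    AIRisk("model drift in production", likelihood=3, impact=3),
    AIRisk("prompt injection", likelihood=2, impact=2),
]
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: score={r.score} -> {r.treatment()}")
```

The scored and ranked output is the kind of evidence an auditor would expect to see feeding the Statement of Applicability.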
Clause 7 — Support
Ensures the AIMS has the resources it needs:
- Resources — personnel, tools, infrastructure, and budget for AI governance
- Competence — ensuring personnel performing AIMS roles have appropriate AI-related competence
- Awareness — employees must be aware of the AI policy, their AIMS responsibilities, and consequences of non-conformance
- Communication — internal and external communications about the AIMS
- Documented information — creating and maintaining the documentation required by the standard
Clause 8 — Operation
Covers the execution of AI governance:
- Operational planning and control — implementing the risk treatment plans and AI objectives established in Clause 6
- AI risk assessment execution — conducting risk assessments at defined intervals and when significant changes occur
- AI risk treatment execution — implementing selected controls and verifying their effectiveness
- AI system lifecycle — managing AI systems through design, development, deployment, operation, monitoring, and retirement
Clause 9 — Performance Evaluation
Measures and monitors AIMS effectiveness:
- Monitoring, measurement, analysis, and evaluation — determining what to measure, how to measure it, and when
- Internal audit — periodic audits to verify the AIMS conforms to planned arrangements and the standard's requirements
- Management review — top management reviews the AIMS at planned intervals to ensure continuing suitability, adequacy, and effectiveness
Clause 10 — Improvement
Drives continual improvement:
- Nonconformity and corrective action — responding to nonconformities, investigating root causes, and implementing corrections
- Continual improvement — systematically improving the suitability, adequacy, and effectiveness of the AIMS
ISO 42001 in the Broader AI Landscape
ISO 42001 is specifically designed for AI — but understanding how it intersects with the broader AI regulatory and standards landscape is critical for compliance teams.
The Standard as a Governance Layer
ISO 42001 is increasingly functioning as the certifiable governance layer that sits above risk management frameworks and below regulatory compliance:
- EU AI Act — ISO 42001 is a candidate harmonized standard under the AI Act. When formally listed in the Official Journal, certified organizations may benefit from a presumption of conformity for quality management and governance requirements
- NIST AI RMF — the four NIST functions (Govern, Map, Measure, Manage) map to ISO 42001's PDCA clauses, enabling organizations to implement NIST AI RMF practices within an ISO 42001 management system
- National AI strategies — multiple countries (UK, Singapore, Japan, South Korea) reference ISO 42001 in their AI governance guidance
Mapping ISO 42001 to AI Regulations
| AI Regulation | ISO 42001 Alignment | Key Clauses |
|---|---|---|
| EU AI Act (quality management) | Strong — AIMS provides required quality management system | Clauses 5, 6, 7, 8 |
| EU AI Act (risk management) | Strong — AI risk assessment and treatment processes | Clauses 6.1, 8.2, 8.3 |
| EU AI Act (conformity assessment) | Certification supports conformity assessment evidence | Clause 9 + certification |
| Colorado AI Act (governance program) | Satisfies governance program expectations | Clauses 5, 6, 7, 8 |
| GDPR (data protection) | Complementary — data management controls in Annex A | Annex A data controls |
| NIST AI RMF (Govern) | Direct mapping to leadership and planning clauses | Clauses 5, 6 |
| NIST AI RMF (Map) | Maps to context analysis and risk identification | Clauses 4, 6.1 |
| NIST AI RMF (Measure) | Maps to performance evaluation and monitoring | Clause 9 |
| NIST AI RMF (Manage) | Maps to risk treatment and operational controls | Clauses 8, 10 |
Companion Standards
ISO 42001 is part of a growing family of AI standards from ISO/IEC JTC 1/SC 42:
- ISO/IEC 23894:2023 — AI risk management guidance (supports ISO 42001 Clause 6 implementation)
- ISO/IEC 42005:2025 — AI system impact assessment (supports Clause 6 risk assessment)
- ISO/IEC 42006:2025 — requirements for bodies providing audit and certification of AI management systems (ensures audit quality)
- ISO/IEC 22989:2022 — AI concepts and terminology (defines shared vocabulary)
- ISO/IEC 23053:2022 — framework for AI systems using machine learning
- ISO/IEC 38507:2022 — governance implications of the use of AI by organizations
Annex A Controls
Annex A of ISO 42001 provides a normative reference set of 38 AI-specific controls, grouped under nine control objectives, that organizations must consider. Organizations document which controls they have selected, and justify any exclusions, in a Statement of Applicability (SoA).
AI Policy and Organization Controls
- AI policy statement — establishing and communicating the organization's approach to responsible AI
- Roles and responsibilities for AI — defining accountability for AI system ownership, risk management, and oversight
- AI governance structure — establishing committees, review boards, or equivalent governance mechanisms
- Competence and training — ensuring personnel have appropriate AI-related skills and knowledge
AI Risk Assessment Controls
- Risk identification — systematic identification of risks specific to AI systems, including bias, fairness, safety, transparency, and explainability risks
- Risk analysis and evaluation — assessing likelihood and impact of identified AI risks against organizational risk criteria
- Risk treatment selection — choosing appropriate responses: implement controls, accept, transfer, or avoid the risk
AI System Lifecycle Controls
- Requirements and design — documenting AI system requirements including intended purpose, constraints, performance criteria, and ethical considerations
- Data management — controls for data quality, provenance, labeling, bias assessment, and privacy throughout the data lifecycle
- Model development — version control, experimentation tracking, validation, and documentation of model development processes
- Testing and validation — pre-deployment testing including performance evaluation, bias testing, adversarial testing, and validation against intended purpose
- Deployment — controlled deployment processes including rollback capabilities, monitoring setup, and stakeholder communication
- Monitoring and operation — ongoing monitoring of AI system performance, drift detection, incident response, and change management
- Retirement — controlled decommissioning of AI systems including data retention, transition planning, and stakeholder notification
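The monitoring-and-operation control above is often implemented with a statistical drift metric such as the population stability index (PSI). A minimal sketch, assuming equal-width binning over the reference range and the commonly cited 0.2 alert threshold (both are conventions, not requirements of the standard):

```python
import math

def psi(expected, actual, bins=10, eps=1e-6):
    """Population stability index between a reference sample and a live sample.

    Equal-width bins are derived from the reference range; eps guards
    against log(0) for empty bins. Values outside the range are clamped.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard degenerate (constant) reference

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1
        total = len(values)
        return [max(c / total, eps) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [i / 100 for i in range(100)]           # training-time feature sample
live_shifted = [0.5 + i / 200 for i in range(100)]  # distribution shifted upward

assert psi(reference, reference) < 1e-9   # identical distributions: no drift
assert psi(reference, live_shifted) > 0.2 # exceeds the common alert threshold
```

Crossing the threshold would trigger the incident-response and change-management processes the control calls for.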
Third-Party and Supply Chain Controls
- Supplier assessment — evaluating AI-related risks from third-party AI components, models, and data
- Contractual controls — including AI governance requirements in supplier contracts
- Ongoing monitoring — monitoring third-party AI components for changes, vulnerabilities, and performance degradation
Transparency and Documentation Controls
- AI system documentation — maintaining comprehensive records of AI system design, development, testing, and operational decisions
- Transparency to stakeholders — communicating AI system capabilities, limitations, and decision-making processes to affected parties
- Record keeping — maintaining audit trails and records sufficient to demonstrate AIMS conformance
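One common way to make record keeping audit-ready is an append-only, hash-chained log, so any after-the-fact edit or deletion is detectable. The standard requires records sufficient to demonstrate conformance but does not mandate this mechanism; the record fields below are illustrative.

```python
import hashlib
import json

def append_record(log, record):
    """Append a record, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify(log):
    """Recompute the chain; any edited or deleted entry breaks verification."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"system": "credit-model-v3", "event": "bias test passed"})
append_record(log, {"system": "credit-model-v3", "event": "deployed to production"})
assert verify(log)

log[0]["record"]["event"] = "bias test skipped"  # simulated tampering
assert not verify(log)
```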
Certification Process
ISO 42001 certification follows a structured process managed by accredited certification bodies.
Pre-Certification
- Gap analysis — assess your current AI governance practices against ISO 42001 requirements. Identify gaps and create a remediation plan.
- AIMS implementation — implement the management system: policies, risk assessments, controls, documentation, and internal audit processes.
- Internal audit — conduct at least one full internal audit cycle to verify AIMS conformance before the certification audit.
- Management review — top management must review the AIMS and confirm its suitability, adequacy, and effectiveness.
Stage 1 Audit (Documentation Review)
- Duration: 1–2 days
- Focus: reviewing AIMS documentation, policies, risk assessments, Statement of Applicability, and evidence of implementation
- Outcome: the certification body confirms readiness for Stage 2 or identifies areas requiring remediation
Stage 2 Audit (On-Site Assessment)
- Duration: 2–5 days depending on scope and organization size
- Focus: verifying that the AIMS is effectively implemented and operating as documented. Auditors interview staff, review records, observe processes, and assess control effectiveness
- Outcome: certification granted (with or without minor nonconformities) or major nonconformities requiring resolution before certification
Post-Certification
- Surveillance audits — conducted annually (typically 1–2 days) to verify ongoing conformance
- Recertification audit — full reassessment every three years to renew the certificate
- Continual improvement — organizations must demonstrate ongoing improvement of their AIMS between audits
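The three-year cycle above can be laid out programmatically. The anniversary-based scheduling here is a simplifying assumption; actual audit dates are set by the certification body.

```python
from datetime import date

def certification_cycle(cert_date, years=3):
    """Annual surveillance audits, then full recertification after `years` years.

    Note: date.replace() would raise for a Feb 29 certificate anniversary;
    real scheduling is agreed with the certification body anyway.
    """
    events = [
        (cert_date.replace(year=cert_date.year + offset), "surveillance")
        for offset in range(1, years)
    ]
    events.append((cert_date.replace(year=cert_date.year + years), "recertification"))
    return events

for when, kind in certification_cycle(date(2026, 3, 1)):
    print(when.isoformat(), kind)
```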
Certification Bodies
As of 2026, several accredited certification bodies offer ISO 42001 certification:
- BSI (British Standards Institution) — among the first to offer ISO 42001 certification globally
- Bureau Veritas — offers integrated AI management system audits
- DNV — combined ISO 42001 and ISO 27001 audit programs
- TÜV — German-based certification body active in EU AI Act readiness assessments
- SGS — global certification body with AI governance practice
Compliance Timeline
| Date | Milestone |
|---|---|
| December 18, 2023 | ISO/IEC 42001:2023 published |
| Q1 2024 | First certification bodies begin offering ISO 42001 audits |
| June 2024 | BSI issues first ISO 42001 certificates globally |
| 2024 (ongoing) | Certification body accreditation programs roll out via IAF members |
| August 2025 | EU AI Act GPAI obligations apply — ISO 42001 alignment relevant for code of practice |
| 2025–2026 | EU standardization request: ISO 42001 harmonization process underway for EU AI Act |
| June 2026 | Colorado AI Act enforcement begins — ISO 42001 certification supports governance evidence |
| August 2026 | EU AI Act main obligations apply — harmonized standard status expected for ISO 42001 |
| August 2027 | EU AI Act full enforcement — ISO 42001 certification streamlines conformity assessment |
Regulatory Alignment
EU AI Act
ISO 42001 is a candidate harmonized standard under the EU AI Act. When formally harmonized:
- Organizations certified to ISO 42001 may benefit from a presumption of conformity with certain EU AI Act requirements, particularly those related to quality management systems (Article 17) and risk management (Article 9)
- Certification does not replace the conformity assessment requirement for high-risk AI systems, but it provides substantial evidence for the assessment
- ISO 42001's management system structure aligns with the EU AI Act's requirement for providers to establish and maintain quality management systems proportionate to the risk level of their AI systems
NIST AI RMF
ISO 42001 and NIST AI RMF are complementary, not competing:
- NIST AI RMF provides practical, principles-based risk management guidance — the "what to do"
- ISO 42001 provides the certifiable management system structure — the "how to prove you do it"
- Organizations commonly use NIST AI RMF as their substantive risk management methodology within an ISO 42001 management system
GDPR
ISO 42001 complements GDPR compliance for AI systems that process personal data:
- ISO 42001's data management controls address data quality, provenance, and lifecycle management
- The standard's risk assessment process covers privacy risks alongside other AI-specific risks
- Certification provides evidence of systematic data governance that supports GDPR accountability principles
State AI Laws (US)
While no US state law specifically references ISO 42001, certification demonstrates:
- The existence of a structured AI governance program (relevant to Colorado AI Act's governance expectations)
- Systematic risk assessment processes (relevant to impact assessment requirements)
- Third-party validation of AI governance practices (supports "reasonable care" arguments)
Implementation Steps
Use this roadmap to implement ISO 42001 in your organization:
1. Assess your starting point. Determine your current AI governance maturity. Organizations with existing ISO management systems (27001, 9001) will find significant overlap in Clauses 4–10 and can focus on AI-specific requirements. Organizations starting from scratch should plan for 6–18 months of implementation.
2. Define your AIMS scope. Determine which AI systems, organizational units, and processes are within scope. Start with a scope that is manageable — you can expand later. Document the scope in alignment with Clause 4.3.
3. Establish leadership commitment (Clause 5). Secure top management sponsorship. Appoint an AIMS owner. Draft and approve your AI policy. Define roles and responsibilities for AI governance.
4. Conduct AI risk assessment (Clause 6). Establish your AI risk assessment methodology. Identify risks across all in-scope AI systems — including bias, fairness, safety, transparency, security, and data quality risks. Evaluate risks against your organizational risk criteria.
5. Select and implement controls (Annex A). Review Annex A controls against your identified risks. Select applicable controls and document justifications for any exclusions in your Statement of Applicability. Implement selected controls.
6. Build your documentation (Clause 7). Create the required documented information: AI policy, risk assessment records, Statement of Applicability, operational procedures, and evidence of competence and awareness programs. If you have an existing ISO document management system, extend it.
7. Implement operational controls (Clause 8). Execute your risk treatment plans. Implement AI system lifecycle management processes — from requirements through deployment, monitoring, and retirement. Manage third-party AI components.
8. Conduct internal audit (Clause 9). Plan and execute at least one complete internal audit cycle covering all AIMS clauses and applicable Annex A controls. Document findings and corrective actions.
9. Complete management review (Clause 9). Present AIMS performance, audit results, risk status, and improvement opportunities to top management. Document management review outputs including decisions and resource allocation.
10. Engage a certification body (Stage 1 + Stage 2). Select an accredited certification body. Schedule your Stage 1 (documentation review) and Stage 2 (on-site assessment) audits. Address any nonconformities identified during audits and receive your certificate.
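The first roadmap step, the gap analysis, can begin as a simple clause-coverage checklist. The one-line maturity statements below are illustrative summaries of Clauses 4–10, not audit criteria.

```python
# Illustrative one-line readiness statements per clause -- a real gap
# analysis would decompose each clause into its individual requirements.
CLAUSES = {
    "4 Context": "AIMS scope and stakeholder analysis documented",
    "5 Leadership": "AI policy approved; roles and responsibilities assigned",
    "6 Planning": "Risk assessment methodology and SoA in place",
    "7 Support": "Competence, awareness, and documentation programs running",
    "8 Operation": "Lifecycle and risk treatment controls operating",
    "9 Performance": "Internal audit and management review completed",
    "10 Improvement": "Corrective-action process established",
}

def gap_report(status):
    """status maps clause -> True (implemented) / False (gap)."""
    gaps = [c for c in CLAUSES if not status.get(c, False)]
    coverage = 100 * (len(CLAUSES) - len(gaps)) / len(CLAUSES)
    return coverage, gaps

status = {c: False for c in CLAUSES}
status["4 Context"] = status["5 Leadership"] = True
coverage, gaps = gap_report(status)
print(f"Clause coverage: {coverage:.1f}%")
for clause in gaps:
    print(f"GAP  {clause}: {CLAUSES[clause]}")
```

The resulting gap list becomes the backbone of the remediation plan that precedes the Stage 1 audit.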
Frequently Asked Questions
What is ISO/IEC 42001? ISO/IEC 42001:2023 is the first international standard for AI management systems. It provides requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). Organizations can achieve third-party certification to demonstrate that their AI governance meets internationally recognized requirements.
How much does ISO 42001 certification cost? Certification audit costs range from $15,000 to $60,000+ depending on organization size, scope, and the certification body. Annual surveillance audits add $5,000–$20,000 per year. Internal preparation costs (gap analysis, control implementation, documentation, training) are additional and vary widely. Organizations with existing ISO management systems face lower incremental costs.
How does ISO 42001 relate to the EU AI Act? ISO 42001 is a candidate harmonized standard under the EU AI Act. When formally listed, certification may provide a presumption of conformity with certain EU AI Act requirements — particularly quality management and risk management obligations. This streamlines but does not replace the conformity assessment process for high-risk AI systems.
How is ISO 42001 different from NIST AI RMF? NIST AI RMF is a free, US-government-published voluntary framework with no certification mechanism — it provides practical risk management guidance. ISO 42001 is a certifiable international standard with a formal audit process — it provides the management system structure. Most organizations benefit from using both: NIST AI RMF for substantive risk management and ISO 42001 for certifiable governance.
Do I need ISO 42001 if I already have ISO 27001? ISO 27001 covers information security; ISO 42001 covers AI-specific governance. If your organization develops or deploys AI systems, ISO 27001 alone does not address AI risks like bias, fairness, explainability, and AI system lifecycle management. However, having ISO 27001 significantly accelerates ISO 42001 implementation because both share the Annex SL management system structure — expect 30–50% of your existing processes, documentation, and audit infrastructure to be reusable.
What is the Statement of Applicability? The Statement of Applicability (SoA) is a required document that lists all Annex A controls, states whether each is applicable, and justifies any exclusions. It functions as the bridge between your risk assessment outcomes and your implemented controls — auditors use it as a primary reference during certification audits.
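In practice the SoA is often maintained as a structured register rather than a prose document. A minimal sketch with hypothetical control IDs and titles (a real SoA should use the actual Annex A numbering from the standard):

```python
from dataclasses import dataclass, field

@dataclass
class SoAEntry:
    control_id: str     # hypothetical IDs; use the standard's Annex A numbering
    title: str
    applicable: bool
    justification: str  # mandatory for exclusions; useful for inclusions too
    linked_risks: list = field(default_factory=list)

soa = [
    SoAEntry("A.x.1", "AI policy", True,
             "Required for all in-scope AI systems",
             linked_risks=["training-data bias"]),
    SoAEntry("A.x.2", "Supplier assessment", False,
             "No third-party AI components in current scope"),
]

# Every excluded control must carry a documented justification.
assert all(e.justification for e in soa if not e.applicable)
excluded = [e.control_id for e in soa if not e.applicable]
print("Excluded controls:", excluded)
```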