What Is ISO 42001?
ISO/IEC 42001:2023 is the international standard for Artificial Intelligence Management Systems (AIMS). Published in December 2023 jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), it defines the requirements for establishing, implementing, maintaining, and continually improving an AI management system within an organization.
Unlike legally binding instruments such as the EU AI Act or NYC Local Law 144, ISO 42001 is a voluntary framework that organizations can adopt to demonstrate responsible AI governance. Certification is issued by accredited third-party certification bodies—the same bodies that issue ISO 9001 (quality) and ISO 27001 (information security) certificates.
The standard follows the familiar Plan-Do-Check-Act structure used across all modern ISO management system standards, which means organizations already certified under ISO 27001 or ISO 9001 will find significant overlap in the documentation and governance requirements.
Who Needs ISO 42001 Certification?
Certification is not legally mandated by any jurisdiction as of May 2026, but demand is rising fast for several reasons:
- EU AI Act procurement requirements. Public sector buyers in the EU are beginning to require ISO 42001 certification or equivalent as a condition of AI vendor contracts.
- Enterprise procurement. Large enterprises are adding ISO 42001 to supplier questionnaires alongside SOC 2 and ISO 27001.
- Demonstrating EU AI Act compliance. Harmonised standards under the EU AI Act are being drafted by CEN-CENELEC JTC 21, drawing heavily on ISO/IEC work; if ISO 42001 is adopted into that framework, certified organizations may benefit from a presumption of conformity with certain EU AI Act obligations.
- Insurance and liability. AI liability insurers are beginning to use certification status as an underwriting factor.
Organizations that develop, deploy, or operate AI systems—particularly high-risk or consequential AI—should treat ISO 42001 certification as a near-term compliance priority.
The Certification Process at a Glance
A typical ISO 42001 certification engagement runs 4 to 18 months, depending on organizational size and AI system complexity. The process involves:
- Gap assessment against the standard's requirements
- AIMS implementation—policies, procedures, risk processes, documentation
- Internal audit to verify readiness
- Stage 1 audit (document review) by a certification body
- Stage 2 audit (on-site or remote evidence review)
- Certificate issuance (valid 3 years, with annual surveillance audits)
The checklist below maps directly to the standard's clause structure. Use it as a pre-audit readiness tool.
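If you track the checklist in a spreadsheet or internal tool, the underlying data model is simple. Here is a minimal sketch in Python (the class names, fields, and sample entries are illustrative, not drawn from the standard's text):

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    clause: str          # e.g. "4.3" -- the clause it maps to
    description: str
    owner: str = ""      # accountable person or function
    done: bool = False   # evidence exists and is audit-ready

@dataclass
class ReadinessTracker:
    requirements: list = field(default_factory=list)

    def readiness(self) -> float:
        """Fraction of requirements marked complete."""
        if not self.requirements:
            return 0.0
        return sum(r.done for r in self.requirements) / len(self.requirements)

    def open_items(self) -> list:
        """Requirements still lacking evidence -- your pre-audit gap list."""
        return [r for r in self.requirements if not r.done]

tracker = ReadinessTracker([
    Requirement("4.3", "Scope of the AIMS defined and documented", "CISO", True),
    Requirement("5.2", "AI policy approved by top management", "General Counsel", False),
])
print(f"{tracker.readiness():.0%} ready")  # 50% ready
```

The point of modelling it this way rather than as a flat to-do list: each item carries an owner, so accountability survives when the checklist is split across teams.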
ISO 42001 Checklist: Clause-by-Clause Requirements
Phase 1: Context of the Organization (Clause 4)
These requirements establish the boundaries and purpose of your AI management system.
- [ ] 4.1 Identified and documented internal and external issues relevant to the organization's AI activities
- [ ] 4.1 Assessed how those issues affect the AIMS's ability to achieve its intended outcomes
- [ ] 4.2 Identified all interested parties (customers, regulators, employees, affected communities) and their requirements related to AI
- [ ] 4.2 Determined which interested party requirements will be addressed through the AIMS
- [ ] 4.3 Defined the scope of the AIMS, including which AI systems, processes, and organizational units are covered
- [ ] 4.3 Documented the scope statement and made it available as documented information
- [ ] 4.4 Established, implemented, maintained, and planned for continual improvement of the AIMS in accordance with the standard's requirements
Common gap: Organizations frequently define scope too narrowly, excluding AI systems embedded in third-party tools or acquired through M&A. Auditors will probe scope boundaries aggressively.
Phase 2: Leadership (Clause 5)
Leadership commitment is not ceremonial—auditors look for evidence that top management actively governs AI risk.
- [ ] 5.1 Top management has demonstrated leadership and commitment by establishing AI policy, ensuring AIMS integration with business processes, and directing resources
- [ ] 5.1 Top management has promoted a culture of responsible AI use
- [ ] 5.2 An AI policy has been established that is appropriate to the organization's purpose, includes a commitment to satisfying applicable requirements, and commits to continual improvement
- [ ] 5.2 The AI policy is documented, communicated internally, and available to interested parties as appropriate
- [ ] 5.3 Organizational roles, responsibilities, and authorities relevant to the AIMS are assigned and communicated
- [ ] 5.3 A responsible individual or function has been assigned accountability for AIMS performance reporting to top management
Common gap: AI policy documents that exist on paper but have no evidence of board or executive review. Auditors will ask for meeting minutes, approval records, or equivalent evidence.
Phase 3: Planning (Clause 6)
This is where AI risk management and objective-setting live—often the most substantive clause for AI-intensive organizations.
- [ ] 6.1.1 Established a process to identify risks and opportunities related to the AIMS
- [ ] 6.1.2 Conducted an AI risk assessment that identifies risks to individuals and society from AI systems
- [ ] 6.1.2 AI risk assessment criteria (likelihood, impact, acceptable risk thresholds) are defined and documented
- [ ] 6.1.2 AI risk assessment results are documented and retained
- [ ] 6.1.3 An AI risk treatment plan is documented, identifying selected controls and justifications
- [ ] 6.1.3 Risk owners have accepted residual risks following treatment
- [ ] 6.1.4 An AI system impact assessment process is defined, documented, and retained as documented information
- [ ] 6.2 AI objectives are established at relevant functions and levels, are measurable, and are monitored
- [ ] 6.2 Plans to achieve AI objectives include responsible parties, timelines, and evaluation methods
- [ ] 6.3 Changes to the AIMS are carried out in a planned manner
Common gap: Risk assessments that cover cybersecurity risk but omit AI-specific harms—bias, fairness failures, autonomy impacts, and societal effects. ISO 42001 explicitly requires harm assessment beyond traditional IT risk categories.
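Clause 6.1.2 leaves the scoring scheme to you, but auditors expect it to be explicit and documented. A hypothetical 5×5 likelihood-by-impact scheme with an assumed acceptance threshold might look like this (the scales and the threshold value are assumptions you would replace with your own documented criteria):

```python
# Illustrative risk scoring against a documented acceptance threshold.
# The 1-5 scales and the threshold are assumptions -- ISO 42001 requires
# you to define and document your own criteria (6.1.2).
ACCEPTABLE_RISK_THRESHOLD = 8  # scores above this require treatment (6.1.3)

def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a documented 1-5 scale."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def needs_treatment(likelihood: int, impact: int) -> bool:
    """True when the score exceeds the documented acceptance threshold."""
    return risk_score(likelihood, impact) > ACCEPTABLE_RISK_THRESHOLD

# A bias-related harm judged likely (4) and severe (4) scores 16 -> treat it.
print(needs_treatment(4, 4))  # True
```

Whatever scheme you use, the auditor's question is the same: can you show where the threshold is written down, and who accepted the residual risk for scores under it?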
Phase 4: Support (Clause 7)
Support requirements address the resources, competencies, and infrastructure needed to run the AIMS.
- [ ] 7.1 Resources required for AIMS establishment, implementation, maintenance, and improvement are determined and provided
- [ ] 7.2 Persons doing work under the AIMS have the necessary competence (education, training, or experience)
- [ ] 7.2 Competence requirements are documented; training or other actions taken to acquire competence are recorded
- [ ] 7.3 Persons are aware of the AI policy, their contribution to AIMS effectiveness, and the implications of nonconformity
- [ ] 7.4 Internal and external communications relevant to the AIMS are planned (what, when, to whom, how)
- [ ] 7.5.1 Documented information required by the standard exists and is controlled
- [ ] 7.5.2 New documented information is identified with appropriate metadata (title, author, date, version)
- [ ] 7.5.3 Documented information is controlled for distribution, access, retrieval, storage, version control, retention, and disposition
Common gap: Competence records. Many organizations train staff on AI ethics but cannot produce evidence of training completion, assessment results, or competency verification for auditors.
Phase 5: Operations (Clause 8)
The operational clauses are the heart of ISO 42001—they define how AI systems are actually governed throughout their lifecycle. Note that Clause 8 itself is short (it runs only 8.1–8.4); the detailed lifecycle, data, supply-chain, and disclosure expectations sit in the Annex A controls selected through your risk treatment plan.
- [ ] 8.1 Operational processes needed to meet AIMS requirements are planned, implemented, controlled, and reviewed
- [ ] 8.2 AI risk assessments are performed at planned intervals and when significant changes are proposed or occur
- [ ] 8.3 The AI risk treatment plan is implemented, and results are retained as documented information
- [ ] 8.4 AI impact assessments are conducted before deploying AI systems and when significant changes occur
- [ ] 8.4 Impact assessment results and treatment decisions are retained as documented information
- [ ] Annex A: An AI system lifecycle management process covers design, development, testing, deployment, monitoring, and decommissioning
- [ ] Annex A: AI system objectives, intended use, and foreseeable misuse are documented for each in-scope AI system
- [ ] Annex A: Data management processes address data quality, data provenance, and data governance requirements for AI training and inference data
- [ ] Annex A: Processes exist to identify and address data bias before and during AI system deployment
- [ ] Annex A: Processes address the use of AI by the organization (as a deployer) in addition to AI development (as a developer), where both apply
- [ ] Annex A: Third-party and supply chain AI risks are assessed; contracts or agreements address AI governance requirements with suppliers
- [ ] Annex A: Documented criteria exist for responsible disclosure of AI system information to affected parties
Common gap: Organizations strong on development governance but weak on third-party AI use. If your organization uses AI tools built by vendors—hiring tools, fraud detection, content moderation—those systems must be within scope and subject to impact assessment.
Phase 6: Performance Evaluation (Clause 9)
Certification is not a one-time event. Auditors will test whether monitoring and measurement are genuinely operational.
- [ ] 9.1 Methods for monitoring, measurement, analysis, and evaluation of AIMS performance are defined
- [ ] 9.1 Monitoring and measurement results are documented and retained
- [ ] 9.1 AI system performance against defined objectives is evaluated at planned intervals
- [ ] 9.2 Internal audits are conducted at planned intervals to verify AIMS conformance and effective implementation
- [ ] 9.2 An internal audit program exists with defined scope, frequency, methods, and responsibilities
- [ ] 9.2 Internal audit results are reported to relevant management and retained as documented information
- [ ] 9.3 Management reviews of the AIMS are conducted at planned intervals
- [ ] 9.3 Management review inputs include audit results, customer feedback, AI risk status, objective performance, and continual improvement opportunities
- [ ] 9.3 Management review outputs (decisions, actions) are retained as documented information
Common gap: Internal audit programs that exist for ISO 27001 but have not been extended to cover AI-specific requirements. A single combined audit program is efficient but must explicitly address AI risks and controls.
Phase 7: Improvement (Clause 10)
- [ ] 10.1 Opportunities for improvement are identified and acted upon
- [ ] 10.2 Nonconformities are identified, documented, root-caused, corrected, and evaluated for recurrence
- [ ] 10.2 Corrective actions are appropriate to the effects of the nonconformities encountered
- [ ] 10.2 Results of corrective actions are retained as documented information
- [ ] 10.3 The organization continually improves the suitability, adequacy, and effectiveness of the AIMS
Annex A Controls: What Else Do Auditors Check?
ISO 42001 includes Annex A, which lists 38 additional controls (similar to ISO 27001's Annex A for information security). Organizations must document a Statement of Applicability declaring which Annex A controls apply and why any are excluded.
Key Annex A control areas include:
- AI system transparency and explainability
- Human oversight mechanisms
- Incident management for AI-related failures
- AI-specific data governance
- Stakeholder engagement and communication
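The Statement of Applicability is essentially a table: one row per Annex A control, an applicability decision, and a justification either way. A minimal sketch of that structure (the control IDs below are placeholders, not real Annex A references, and the justifications are invented examples):

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control_id: str      # Annex A control reference (placeholder IDs here)
    applicable: bool
    justification: str   # required for inclusions AND exclusions

soa = [
    SoAEntry("A.x.1", True,  "We develop AI systems in-house; lifecycle controls apply."),
    SoAEntry("A.x.2", False, "No third-party AI suppliers fall within the AIMS scope."),
]

# Auditors read exclusions first -- every one needs a defensible rationale.
excluded = [e for e in soa if not e.applicable]
assert all(e.justification for e in soa), "every entry needs a documented rationale"
print([e.control_id for e in excluded])  # ['A.x.2']
```

The design choice worth copying: justification is a mandatory field, not an optional note, because an exclusion without a rationale is a standard Stage 1 finding.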
How Long Does ISO 42001 Certification Take?
| Organization Size | Estimated Timeline |
|---|---|
| Small (1–50 employees, 1–3 AI systems) | 4–6 months |
| Mid-size (50–500 employees) | 6–12 months |
| Large enterprise | 12–18 months |
The biggest time variable is documentation maturity. Organizations with existing ISO 27001 programs can typically achieve ISO 42001 certification in 4–8 months due to shared infrastructure. Organizations starting from scratch should budget closer to 12 months.
Five Gaps That Fail Certification Audits
- No AI system inventory. You cannot govern what you have not catalogued. Auditors start by asking for the complete list of AI systems in scope.
- Impact assessments not conducted before deployment. Organizations that deployed AI systems before the AIMS was established must retroactively document assessments.
- Third-party AI systems excluded from scope. Vendor-supplied AI tools used in consequential decisions must be assessed.
- Competence records absent. Training completion alone is insufficient without documented competency verification.
- Top management cannot articulate AI governance. Auditors frequently interview executives. If leadership cannot speak to AI risk and policy, the certification body will issue major nonconformities.
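The first gap is usually the cheapest to close. A bare-bones inventory that also flags missing impact assessments could look like this (the field names and sample systems are illustrative; the standard does not mandate a schema):

```python
# Minimal AI system inventory; fields and entries are illustrative only.
inventory = [
    {
        "name": "resume-screener",
        "origin": "third-party",        # in scope even though vendor-built
        "use_case": "hiring decisions", # consequential -> impact assessment required
        "owner": "Head of Talent",
        "impact_assessment_date": "2026-01-15",
    },
    {
        "name": "support-chatbot",
        "origin": "in-house",
        "use_case": "customer support",
        "owner": "VP Engineering",
        "impact_assessment_date": None,  # missing -> a pre-audit gap
    },
]

# Systems deployed without a documented impact assessment surface immediately.
gaps = [s["name"] for s in inventory if s["impact_assessment_date"] is None]
print(gaps)  # ['support-chatbot']
```

Note that the vendor-built system sits in the same inventory as the in-house one—keeping them in one list is what prevents the third-party scoping gap described above.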
Use Regulome.io to Track Your ISO 42001 Readiness
Working through this ISO 42001 checklist manually across distributed teams creates version control and accountability problems. Regulome.io maps ISO 42001 requirements to your specific AI systems, tracks completion status by owner, surfaces gaps before your Stage 1 audit, and connects your AIMS obligations to parallel requirements under the EU AI Act and Colorado AI Act. Start your compliance inventory at Regulome.io and enter your first ISO 42001 certification cycle with a clear, auditable readiness picture.
This article is for informational purposes only and does not constitute legal advice. Always consult qualified counsel before making compliance decisions.