
NIST AI RMF Explained: A Compliance Team's Field Guide

What the NIST AI Risk Management Framework is, how its four core functions work, and how it maps to the EU AI Act and Colorado requirements.


If you work in AI compliance, you've probably seen "NIST AI RMF" referenced in vendor documentation, regulatory guidance, and board presentations. But what does it actually require, and how does it relate to the regulations you must comply with?

This field guide explains the NIST AI Risk Management Framework (AI RMF 1.0) in practical terms for compliance teams.

What Is the NIST AI RMF?

The NIST AI Risk Management Framework (NIST AI 100-1) is a voluntary framework published by the US National Institute of Standards and Technology in January 2023. Unlike the EU AI Act or the Colorado AI Act, it is not law and carries no legal penalties, but it has become the de facto standard reference for organizations building AI governance programs in the US.

Why it matters for compliance teams:

  • NIST has published crosswalks mapping the AI RMF to other regimes, including the EU AI Act, so an RMF-aligned program gives you a head start on demonstrating conformity
  • Colorado's AI Act names the NIST AI RMF directly as a benchmark for a reasonable risk management program and the impact assessments it requires
  • Many AI vendors use NIST AI RMF as their governance reference — understanding it helps you evaluate vendor documentation
  • Board-level AI governance expectations are increasingly framed around NIST AI RMF

The Four Core Functions

The NIST AI RMF organizes AI risk management into four functions, often written as GOVERN → MAP → MEASURE → MANAGE. Here's what each means in practice.

GOVERN

Governance is the foundational function — it sets up the organizational structures, policies, and culture that make AI risk management possible.

In practice, GOVERN means:

  • Assigning clear accountability for AI systems (who owns each AI tool?)
  • Creating an AI policy that defines acceptable use, prohibited use, and governance processes
  • Establishing how new AI tools are evaluated before adoption
  • Ensuring leadership understands AI risk and has visibility into high-risk systems
  • Building a cross-functional AI governance team (legal, engineering, compliance, HR)

GOVERN maps to:

  • Colorado AI Act: governance programs and accountability structures
  • EU AI Act: Article 17 quality management systems; human oversight requirements
  • NYC LL 144: employer accountability for AEDT use

MAP

Mapping is about identifying and contextualizing AI risk — you can't manage what you haven't found.

In practice, MAP means:

  • Maintaining an inventory of all AI systems your organization uses or builds (a minimal record sketch follows this list)
  • Classifying systems by risk level (high / medium / low)
  • Documenting the intended purpose, affected populations, and potential harms of each system
  • Understanding how each AI system interacts with existing processes and decisions
  • Identifying which regulations apply to each system
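
To make the inventory concrete, here is a minimal sketch of what a structured inventory record might look like in Python. The field names and risk tiers are illustrative assumptions, not something NIST or any of these laws prescribes:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory. All field names are illustrative."""
    name: str
    owner: str                       # accountable person or team (GOVERN)
    intended_purpose: str
    affected_populations: list[str]
    risk_level: RiskLevel
    applicable_regulations: list[str] = field(default_factory=list)
    potential_harms: list[str] = field(default_factory=list)

# Hypothetical entry for a resume-screening tool
screener = AISystemRecord(
    name="resume-screener-v2",
    owner="talent-acquisition",
    intended_purpose="Rank inbound job applications for recruiter review",
    affected_populations=["job applicants"],
    risk_level=RiskLevel.HIGH,
    applicable_regulations=["NYC LL 144", "Colorado AI Act"],
    potential_harms=["disparate impact in hiring decisions"],
)
```

Keeping records structured like this pays off in the later functions: MEASURE and MANAGE can filter on risk level, and a regulator's scoping questions map directly onto the applicable-regulations field.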

MAP maps to:

  • Colorado AI Act: AI inventory and impact assessment scope identification
  • EU AI Act: Annex III high-risk classification; provider technical documentation
  • NYC LL 144: Identifying AEDTs and audit scope

MEASURE

Measuring means rigorously evaluating AI systems for the risks you've identified.

In practice, MEASURE means:

  • Conducting bias testing and disparate impact analysis (a worked impact-ratio sketch follows this list)
  • Running accuracy and performance evaluations across population subgroups
  • Testing for robustness to adversarial inputs or distribution shift
  • Assessing explainability — can the system's outputs be explained to affected individuals?
  • Documenting evaluation results and their limitations
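
The statistical core of several of these checks is straightforward arithmetic. Here is a minimal sketch of the impact-ratio calculation that underlies disparate impact analysis (and that NYC LL 144 bias audits report): each category's selection rate divided by the highest category's selection rate. The category names and counts below are invented for illustration:

```python
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps category -> (selected_count, total_count).

    Returns each category's selection rate divided by the highest
    selection rate across all categories.
    """
    rates = {cat: selected / total for cat, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical screening outcomes by demographic category
outcomes = {
    "group_a": (120, 400),  # 30.0% selected
    "group_b": (90, 400),   # 22.5% selected
}
print(impact_ratios(outcomes))
# {'group_a': 1.0, 'group_b': 0.75}
```

A ratio below roughly 0.8 is the traditional "four-fifths rule" flag for potential disparate impact; note that LL 144 requires calculating and publishing the ratios but does not itself set a pass/fail threshold.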

MEASURE maps to:

  • Colorado AI Act: Impact assessment requirement to document bias evaluation
  • EU AI Act: Conformity assessment; technical documentation on testing and performance
  • NYC LL 144: Bias audit statistical analysis and impact ratios

MANAGE

Managing means acting on what you've measured — implementing controls, monitoring, and incident response.

In practice, MANAGE means:

  • Implementing mitigations for identified risks (human review, thresholds, access controls)
  • Setting up ongoing monitoring for model drift and disparate outcomes in production (a monitoring sketch follows this list)
  • Running periodic reviews and updating impact assessments when systems change
  • Documenting how consumer complaints or harm reports are handled
  • Maintaining audit trails
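
As a sketch of what ongoing monitoring can look like (an assumed design, not something NIST prescribes), the check below recomputes impact ratios over a rolling window of production outcomes and flags any category that falls below a threshold for human review:

```python
import logging

logger = logging.getLogger("ai_monitoring")

FOUR_FIFTHS_THRESHOLD = 0.8  # traditional disparate-impact flag

def check_disparate_outcomes(window: dict[str, tuple[int, int]]) -> bool:
    """Return True if any category's impact ratio in the current
    monitoring window falls below the threshold.

    window maps category -> (selected_count, total_count).
    """
    rates = {cat: selected / total for cat, (selected, total) in window.items()}
    best = max(rates.values())
    flagged = False
    for cat, rate in rates.items():
        ratio = rate / best
        if ratio < FOUR_FIFTHS_THRESHOLD:
            # Log for the audit trail, then route to a human reviewer
            logger.warning("impact ratio %.2f for %s is below %.2f",
                           ratio, cat, FOUR_FIFTHS_THRESHOLD)
            flagged = True
    return flagged
```

Whatever the implementation, the MANAGE-relevant properties are that the check runs on a schedule, its results land in the audit trail, and a named owner is notified when a threshold trips.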

MANAGE maps to:

  • Colorado AI Act: Annual updates to impact assessments; consumer appeal processes; monitoring
  • EU AI Act: Post-market monitoring; incident reporting; human oversight requirements
  • NYC LL 144: Annual audit renewal; public posting of updated results

How NIST AI RMF Maps to Key Regulations

NIST Function | Colorado AI Act | EU AI Act | NYC LL 144
GOVERN | Risk management program | Quality management system | Employer accountability
MAP | AI inventory + impact assessment scope | Annex I/III classification | AEDT identification
MEASURE | Bias evaluation in impact assessment | Conformity assessment + testing | Third-party bias audit
MANAGE | Monitoring + consumer rights | Post-market monitoring + incident reporting | Annual audit renewal

Is NIST AI RMF Compliance Enough?

No. NIST AI RMF is a framework, not a legal standard. Following it does not mean you're compliant with the Colorado AI Act, EU AI Act, or NYC LL 144.

However, NIST AI RMF provides an excellent organizational structure for your compliance program. If you build your governance program around GOVERN/MAP/MEASURE/MANAGE, you'll have the right building blocks to satisfy each regulation's specific requirements — you'll just need to layer in the law-specific elements (exact assessment formats, required consumer disclosures, audit standards, etc.).

Getting Started

For most compliance teams, the practical starting point is:

  1. GOVERN first — assign AI ownership and establish a simple policy
  2. MAP second — inventory your AI systems and classify risk
  3. MEASURE next — start impact assessments for high-risk systems
  4. MANAGE last — implement monitoring and controls

Don't try to implement everything at once. A credible, documented partial program is better than an aspirational undocumented one.



Not legal advice. This article is for informational purposes only. Always consult a qualified attorney for compliance decisions.