Colorado’s AI Act (SB 24-205) includes a provision that’s gotten significant attention: deployers and developers who comply with a technical standard or framework recognized by the AG as meeting the Act’s requirements may receive safe harbor protection. The NIST AI Risk Management Framework has been identified as one such recognized framework.
But what does it actually mean to “comply with NIST AI RMF” for Colorado purposes? The answer requires unpacking both the Act and the framework.
What the Safe Harbor Provision Says
The Act allows the AG to recognize standards and frameworks that meet the Act’s requirements. Businesses that comply with such a recognized standard:
- Are presumed to have met the Act’s requirements covered by that standard
- Have reduced exposure to enforcement penalties for covered violations
- Can use documented alignment as evidence of good-faith compliance
The safe harbor is not absolute immunity. It doesn’t protect against all violations — only those covered by the recognized standard. And “complying with NIST AI RMF” requires actual implementation, not merely claiming to follow the framework.
What NIST AI RMF Alignment Requires for Colorado
The AG’s guidance maps Colorado AI Act requirements to NIST AI RMF functions. Here’s how the mapping works:
Impact Assessment → NIST MAP + MEASURE
Colorado requires deployers to conduct impact assessments covering risk of algorithmic discrimination, mitigation measures, data governance, and human oversight.
NIST MAP and MEASURE cover this directly:
- MAP: Identify the context, affected stakeholders, and potential harms
- MEASURE: Test whether the system performs fairly across demographic groups; document methodology and results
To establish NIST alignment on this requirement, you need three things: documented MAP analysis for each high-risk AI system, documented MEASURE results with bias/fairness testing, and a clear connection between the two.
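To make the MEASURE step concrete, here is a minimal sketch of one common fairness test: comparing selection rates across demographic groups and computing a disparate impact ratio. The group labels, the sample data, and the 80% threshold are illustrative assumptions — neither the Act nor the RMF mandates this specific metric.

```python
# Hypothetical MEASURE-style fairness check. Groups, data, and the
# 0.8 threshold are illustrative assumptions, not legal requirements.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Toy decision log: (demographic group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]
rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)             # {'group_a': 0.75, 'group_b': 0.5}
print(round(ratio, 2))   # 0.67 -> below the commonly cited 0.8 benchmark
```

Whatever metric you choose, the documentation point is the same: record the methodology, the results, and how they fed back into mitigation.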
Human Oversight → NIST GOVERN + MANAGE
Colorado requires meaningful human oversight and appeals processes.
NIST GOVERN and MANAGE cover this through:
- GOVERN: Policy requiring human review for high-risk decisions
- MANAGE: Incident response procedures; escalation processes
Documentation: Written policy on human oversight + evidence it’s actually implemented (audit logs, training records, appeals handling records).
Ongoing Monitoring → NIST MEASURE (continuous)
Colorado requires ongoing monitoring after deployment.
NIST MEASURE includes continuous monitoring requirements. Documentation: Monitoring dashboard, defined alert thresholds, records of monitoring reviews.
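As a sketch of what “defined alert thresholds” can look like in practice, here is a minimal monitoring check that flags a reporting window when fairness or accuracy drifts past configured limits. The threshold values and field names are illustrative assumptions, not values prescribed by the AG or NIST.

```python
# Hypothetical ongoing-monitoring check. Threshold values and field
# names are illustrative assumptions.
THRESHOLDS = {"min_di_ratio": 0.8, "max_error_rate": 0.05}

def evaluate_window(window):
    """window: dict with 'di_ratio' and 'error_rate' for one reporting
    period. Returns a list of alert strings (empty if all checks pass)."""
    alerts = []
    if window["di_ratio"] < THRESHOLDS["min_di_ratio"]:
        alerts.append(f"disparate impact ratio {window['di_ratio']:.2f} "
                      f"below minimum {THRESHOLDS['min_di_ratio']}")
    if window["error_rate"] > THRESHOLDS["max_error_rate"]:
        alerts.append(f"error rate {window['error_rate']:.2%} above "
                      f"maximum {THRESHOLDS['max_error_rate']:.0%}")
    return alerts

# One quarterly window: fairness drifted, accuracy held.
print(evaluate_window({"di_ratio": 0.72, "error_rate": 0.03}))
```

The documentation value comes from keeping the triggered alerts and their resolutions, not just the dashboard itself.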
What “Documentation” Means in Practice
The safe harbor only works if you can prove it. The AG can request your documentation. What you need:
AI Inventory: Every high-risk AI system identified, with NIST risk classification
System Profiles (model cards or AI datasheets): For each system: what it does, what inputs it uses, what decisions it affects, who is affected
Risk Assessments: MAP analysis + MEASURE results for each high-risk system
Mitigation Records: What you did in response to identified risks, tracked to completion
Monitoring Records: Ongoing performance data, including any alerts triggered and how they were resolved
Human Oversight Evidence: Policy + evidence of actual operation (appeals logs, override records, training completions)
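One way to keep the inventory and system-profile items above consistent is to capture each system as a structured record. This is a minimal sketch; the field names and the example system are illustrative assumptions, not a format mandated by the AG or NIST.

```python
# Hypothetical system-profile record covering the inventory and
# model-card fields described above. All field names and the example
# values are illustrative assumptions.
from dataclasses import dataclass, field, asdict

@dataclass
class SystemProfile:
    name: str
    purpose: str                   # what the system does
    inputs: list                   # data the system consumes
    decisions_affected: list       # consequential decisions it influences
    affected_populations: list     # who is affected
    risk_classification: str       # classification plus short rationale
    nist_functions: list = field(
        default_factory=lambda: ["GOVERN", "MAP", "MEASURE", "MANAGE"])

profile = SystemProfile(
    name="resume-screener-v2",
    purpose="Ranks job applicants for recruiter review",
    inputs=["resume text", "application form fields"],
    decisions_affected=["interview shortlisting"],
    affected_populations=["job applicants"],
    risk_classification="high-risk (consequential employment decision)",
)
print(asdict(profile)["name"])
```

A structured record like this also makes it easy to reference the relevant NIST functions in each document, which matters for the safe harbor mapping.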
Common Mistakes That Undermine the Safe Harbor
Adopting NIST language without substance. Saying you follow NIST AI RMF without actually conducting MAP analysis, MEASURE testing, or MANAGE response does not create safe harbor protection.
Point-in-time compliance only. NIST AI RMF is a cycle. A one-time impact assessment that’s never updated doesn’t satisfy ongoing Colorado monitoring requirements.
No documentation trail. Safe harbor requires being able to demonstrate compliance. Undocumented good practices don’t help in an enforcement action.
Treating vendor documentation as sufficient. Your vendor’s NIST alignment doesn’t substitute for your own. As a deployer, you need your own documentation of how you’ve assessed and monitored what you’ve deployed.
Getting the Documentation Right
The AG’s office has signaled that documented good-faith compliance efforts will be weighed in enforcement decisions. An imperfect but documented program is better than an undocumented perfect program.
Minimum documentation package for Colorado safe harbor:
- AI inventory with risk classification rationale
- Impact assessment for each high-risk system (MAP + MEASURE)
- Mitigation plan with completion status
- Human oversight policy + operation evidence
- Monitoring reports (at least quarterly)
- Reference to NIST AI RMF functions in each document
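A simple way to operationalize the checklist above is a completeness check run before any attestation or AG inquiry. The document keys below are illustrative shorthand for the items in the list, not official names.

```python
# Hypothetical completeness check for the minimum documentation
# package. Document keys are illustrative shorthand.
REQUIRED_DOCS = [
    "ai_inventory",        # inventory with risk classification rationale
    "impact_assessments",  # MAP + MEASURE per high-risk system
    "mitigation_plan",     # with completion status
    "oversight_policy",    # policy + operation evidence
    "monitoring_reports",  # at least quarterly
]

def missing_docs(package):
    """package: dict mapping doc key -> path or truthy marker.
    Returns the keys that are absent or empty."""
    return [d for d in REQUIRED_DOCS if not package.get(d)]

pkg = {"ai_inventory": "inventory.xlsx", "impact_assessments": True,
       "mitigation_plan": True, "oversight_policy": True}
print(missing_docs(pkg))  # ['monitoring_reports']
```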
This documentation package is also useful for the Virginia HB 2094 compliance assessment (Virginia's requirements parallel Colorado’s) and provides a strong foundation for EU AI Act conformity assessment.
This article is for informational purposes only and does not constitute legal advice. Always consult qualified counsel before making compliance decisions. Try the free compliance checker →
