The Colorado AI Act (SB 24-205) requires deployers of high-risk AI systems to complete an impact assessment before deployment and to keep it current on an ongoing basis. This is one of the Act’s most operationally significant requirements. Here’s how to do it.
Who Must Complete an Impact Assessment
Deployers of high-risk AI systems in Colorado must complete impact assessments. Under the Act, a deployer is any person or business doing business in Colorado that deploys a high-risk AI system. In practice, that means any organization using AI to make consequential decisions about Colorado residents.
Consequential decisions are those with a material legal or similarly significant effect on the provision or denial, or the cost or terms, of: education enrollment or an education opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, health-care services, housing, insurance, or a legal service.
You don’t have to build the AI yourself — if you’re using a third-party AI tool to make consequential decisions, you’re likely a deployer.
What the Impact Assessment Must Cover
The Act specifies the required contents. Your assessment must document the following (a structural sketch follows the list):
1. Purpose and Intended Use
- What the AI system does
- The specific decisions it influences or makes
- The intended population affected
2. Benefits
- Articulate what the AI system is supposed to achieve
- Why AI was selected over other approaches
3. Risks of Algorithmic Discrimination
- Identify how the system could discriminate against protected classes
- Document what protected characteristics are relevant to the use case
- Assess whether the training data reflects historical biases
4. Measures Taken to Mitigate Discrimination
- Testing methodology used
- Bias metrics evaluated and results
- Safeguards implemented (technical and procedural)
- Ongoing monitoring approach
5. Data Governance
- What data is used by the system
- Data quality controls
- Data retention policies
6. Human Oversight
- How humans review, override, or audit AI decisions
- The process for consumers to contest AI decisions
- Staff training on AI oversight
7. Post-Deployment Monitoring
- How you will monitor for errors and bias after deployment
- Frequency of review
- Thresholds that would trigger remedial action
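If it helps to see the required elements as one artifact, here is a minimal structural sketch in Python. The class and field names are our own shorthand for the seven elements above, not statutory terms from SB 24-205.

```python
from dataclasses import dataclass

# Hypothetical record for one impact assessment. Field names are our
# shorthand for the seven required elements, not statutory language.
@dataclass
class ImpactAssessment:
    system_name: str
    purpose_and_intended_use: str    # 1. what it does, decisions, population
    benefits: str                    # 2. goals and why AI over alternatives
    discrimination_risks: list[str]  # 3. protected classes, data bias review
    mitigations: list[str]           # 4. testing, metrics, safeguards
    data_governance: str             # 5. data used, quality, retention
    human_oversight: str             # 6. review, override, appeal process
    monitoring_plan: str             # 7. cadence, thresholds, remediation
    last_reviewed: str = ""          # ISO date of most recent review
```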
Step-by-Step Process
Step 1: Inventory Your AI Systems (Weeks 1-2)
List every AI system or tool your organization uses that could influence a consequential decision. Cast a wide net — include:
- HR and recruiting tools
- Credit and lending decisioning
- Customer risk scoring
- Document processing with AI
For each system, determine: Does it make or influence a consequential decision? Does it affect Colorado residents?
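A lightweight way to run the inventory is one record per system, with the two screening questions captured as fields. This is an illustrative sketch; the field names are ours, not from the Act.

```python
from dataclasses import dataclass

# Illustrative inventory entry: one record per AI system or tool.
@dataclass
class AISystemRecord:
    name: str
    vendor: str | None          # None if built in-house
    decision_type: str          # e.g. "resume screening", "credit limit"
    consequential: bool         # screening question 1
    affects_co_residents: bool  # screening question 2
```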
Step 2: Identify High-Risk Systems (Week 2)
Apply the Colorado AI Act definition. A high-risk AI system is one that, when deployed, makes a consequential decision or is a substantial factor in making one. Work with legal counsel to apply the definition to your specific systems.
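Continuing the inventory sketch from Step 1, a first-pass triage might look like the following. The filter only narrows the review list; the statutory "substantial factor" analysis still belongs to counsel.

```python
def high_risk_candidates(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """First-pass triage: keep systems that passed both screening questions.

    This narrows the review list; it does not replace the legal analysis
    of whether a system is a substantial factor in a consequential decision.
    """
    return [s for s in inventory if s.consequential and s.affects_co_residents]
```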
Step 3: Collect Documentation from Vendors (Weeks 3-4)
If you use third-party AI tools, request:
- Documentation of training data sources
- Bias testing results
- Accuracy metrics across demographic groups
- Any existing impact assessments or bias audits
Many vendors will only share these materials under NDA. Push for them; you need them to complete your own assessment, and the Act requires developers to make key documentation available to deployers.
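One simple way to keep the vendor chase from stalling is a per-system status tracker. The artifact names below mirror the request list above; the statuses are made up for illustration.

```python
# Hypothetical status tracker for one vendor's documentation (Step 3).
vendor_docs = {
    "training_data_sources": "received",
    "bias_testing_results": "pending NDA signature",
    "demographic_accuracy_metrics": "received",
    "vendor_impact_assessment": "requested, no response yet",
}

# Anything not yet received is a gap in your own assessment.
outstanding = [doc for doc, status in vendor_docs.items() if status != "received"]
```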
Step 4: Conduct Internal Risk Analysis (Weeks 4-6)
For each high-risk system:
- Map the decision pipeline (what inputs go in, what decisions come out)
- Identify which protected characteristics could be impacted
- Review historical decision data for disparate outcomes
- Assess whether disparity is present and whether it’s justified (a screening sketch follows this list)
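For the disparate-outcome review, a common screening heuristic is the four-fifths (80%) rule: flag any group whose favorable-outcome rate falls below 80% of the best-treated group’s. This is an EEOC-derived screening convention, not a test the Colorado AI Act itself prescribes. The sketch below assumes a hypothetical decisions.csv export with a group column and a binary approved column.

```python
import pandas as pd

# Hypothetical export of historical outcomes: one row per decision,
# with a "group" column (demographic category) and an "approved"
# column (1 = favorable outcome, 0 = adverse outcome).
df = pd.read_csv("decisions.csv")

rates = df.groupby("group")["approved"].mean()  # selection rate per group
impact_ratios = rates / rates.max()             # ratio vs. best-treated group

# Four-fifths rule: ratios below 0.8 warrant closer review. This is a
# screening heuristic, not the statutory standard under SB 24-205.
print(impact_ratios[impact_ratios < 0.8])
```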
Step 5: Document Mitigations (Week 6)
For each identified risk:
- What have you done to reduce it?
- What ongoing controls are in place?
- What would trigger you to pause or stop using the system? (See the trigger sketch after this list.)
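The “what would make us stop” question is easiest to answer if the triggers are written down as explicit thresholds. The numbers below are placeholders; calibrate them in your own risk analysis.

```python
# Illustrative pause triggers for one system. Thresholds are
# placeholders; set yours from your own risk analysis.
PAUSE_TRIGGERS = {
    "min_impact_ratio": 0.8,   # lowest acceptable selection-rate ratio
    "max_error_rate": 0.05,    # highest acceptable decision error rate
    "max_open_appeals": 10,    # consumer appeals unresolved too long
}

def should_pause(metrics: dict) -> bool:
    """True if any metric crosses its pause threshold."""
    return (
        metrics["min_impact_ratio"] < PAUSE_TRIGGERS["min_impact_ratio"]
        or metrics["error_rate"] > PAUSE_TRIGGERS["max_error_rate"]
        or metrics["open_appeals"] > PAUSE_TRIGGERS["max_open_appeals"]
    )
```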
Step 6: Write the Impact Assessment (Week 7)
Compile everything into a documented assessment. No specific format is mandated, but the assessment must cover all of the required elements. Keep it in writing; the Colorado Attorney General can request it.
Step 7: Establish Ongoing Monitoring (Week 8+)
The assessment isn’t a one-time exercise. Set up:
- Quarterly reviews of decision outcomes
- Annual full reassessment
- Trigger-based reviews if you change the AI system or detect anomalies (a sample review job follows)
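Putting the pieces together, a quarterly review job could look like the sketch below. It reuses should_pause() from Step 5; compute_quarterly_metrics(), alert_compliance_team(), and log_review() are hypothetical stand-ins for your own outcome pipeline, alerting, and record-keeping.

```python
from datetime import date

def quarterly_review(system_name: str) -> None:
    # compute_quarterly_metrics, alert_compliance_team, and log_review
    # are hypothetical placeholders for your own pipeline.
    metrics = compute_quarterly_metrics(system_name)
    if should_pause(metrics):               # thresholds from Step 5
        alert_compliance_team(system_name, metrics)
    log_review(system_name, date.today(), metrics)  # keep the written record
```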
Common Mistakes
Treating it as a paper exercise. The impact assessment must reflect real analysis, not just box-checking. The AG’s office will be looking for substance.
Ignoring third-party tools. “We didn’t build the AI” is not a defense. Deployers are responsible for assessing what they deploy.
Not documenting the human oversight process. The Act requires that consumers have a meaningful way to appeal AI decisions. If you don’t have a process, you need to build one.
Updating annually only. If you significantly modify the AI system, update the assessment then; the Act requires an updated assessment within 90 days of any intentional and substantial modification, not just an annual refresh.
The NIST AI RMF Connection
Colorado explicitly recognizes NIST AI RMF alignment as a good-faith compliance indicator. If you structure your impact assessment using the NIST MEASURE and MANAGE functions, you’re building toward the statutory safe harbor and documenting compliance simultaneously.
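As a starting point, the alignment between assessment sections and RMF functions can be as simple as the mapping below. The function names come from the NIST AI RMF; the section-level assignments are our reading, not an official crosswalk.

```python
# Illustrative mapping of assessment sections to NIST AI RMF functions.
# Assignments are our interpretation, not an official NIST crosswalk.
NIST_ALIGNMENT = {
    "risks_of_algorithmic_discrimination": "MEASURE",  # bias testing, metrics
    "mitigation_measures": "MANAGE",                   # safeguards, responses
    "post_deployment_monitoring": "MEASURE + MANAGE",  # metrics feed actions
}
```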
Enforcement begins June 30, 2026. Start your assessments now: a thorough impact assessment for a complex system takes six to eight weeks at a minimum.
This article is for informational purposes only and does not constitute legal advice. Always consult qualified counsel before making compliance decisions.
