The NIST AI Risk Management Framework organizes AI governance into four core functions: GOVERN, MAP, MEASURE, and MANAGE. Understanding what each function actually requires — not in theory but in implementation — is the foundation of an effective AI governance program.
GOVERN: Build the Foundation
The GOVERN function establishes the organizational context, culture, and processes that make the other three functions possible. It’s the infrastructure layer.
What GOVERN requires:
Policies and procedures. Document how your organization makes decisions about AI development and deployment. This includes: who approves new AI use cases, what risk criteria trigger elevated review, and how AI governance decisions are escalated.
Roles and responsibilities. Assign clear AI governance roles. Who is the AI risk owner? Who conducts risk assessments? Who has authority to stop an AI deployment? These must be real roles held by real people, not just committee names.
Organizational culture. NIST is explicit that governance requires cultural buy-in, not just policy documents. Leaders must signal that AI risk management matters. Staff must have channels to raise concerns.
Supply chain and third-party AI. Your governance must extend to AI you procure from vendors, not just AI you build. Document how you assess and monitor third-party AI.
Practical GOVERN outputs:
- AI governance policy
- AI use case intake and approval process
- Risk escalation matrix
- Vendor AI assessment questionnaire
- RACI matrix for AI risk responsibilities
MAP: Understand the Context and Risk
MAP is the AI risk identification function. Before you can manage AI risk, you need to know what risks exist and in what context.
What MAP requires:
Categorize AI use cases. Not all AI systems carry the same risk. MAP requires you to develop a classification scheme — what makes an AI system high, medium, or low risk? Relevant factors: what decisions does it influence, who is affected, what are the consequences of errors?
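A classification scheme like the one described above can be made concrete as a simple scoring rule. The three factors and the tier thresholds below are illustrative assumptions for one possible scheme, not anything NIST prescribes:

```python
# Hypothetical risk-tier classifier for AI use cases.
# Factors and thresholds are illustrative, not NIST-mandated.

def classify_risk_tier(influences_consequential_decisions: bool,
                       affects_external_parties: bool,
                       errors_cause_serious_harm: bool) -> str:
    """Map three MAP context factors to a risk tier."""
    score = sum([influences_consequential_decisions,
                 affects_external_parties,
                 errors_cause_serious_harm])
    if score >= 2:
        return "high"
    if score == 1:
        return "medium"
    return "low"

# Example: a resume-screening model influences hiring decisions and
# affects external applicants -> high risk.
print(classify_risk_tier(True, True, False))  # prints "high"
```

In practice the factors would be richer (regulated domain, reversibility of decisions, scale of deployment), but even a crude rule like this forces the questions MAP requires you to answer.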
Identify affected stakeholders. For each AI system, who could be helped or harmed? Customers, employees, third parties? MAP requires explicit stakeholder identification.
Document intended use. What is the AI system supposed to do? What inputs does it take, what outputs does it produce? This documentation is the foundation for risk assessment.
Identify failure modes. What happens when the AI is wrong? What happens when it’s right in a narrow sense but causes unintended harms? MAP requires this analysis.
Assess context factors. Relevant regulatory requirements, applicable legal frameworks, industry standards, and organizational risk tolerance all shape how you respond to identified risks.
Practical MAP outputs:
- AI inventory with risk classification
- Stakeholder analysis per AI system
- AI system datasheets (model cards)
- Risk register for each system
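The inventory and datasheet outputs above boil down to a structured record per system. A minimal sketch, with field names that are illustrative assumptions rather than a formal schema:

```python
from dataclasses import dataclass, field

# Minimal sketch of one AI inventory entry / lightweight model card.
# Field names are illustrative assumptions, not a standard schema.

@dataclass
class AISystemRecord:
    name: str
    intended_use: str
    risk_tier: str                       # e.g. "high" / "medium" / "low"
    stakeholders: list = field(default_factory=list)
    known_failure_modes: list = field(default_factory=list)
    owner: str = "unassigned"            # the accountable risk owner

record = AISystemRecord(
    name="loan-approval-model",
    intended_use="Recommend approve/deny for consumer loan applications",
    risk_tier="high",
    stakeholders=["applicants", "underwriters", "regulators"],
    known_failure_modes=["bias against protected groups",
                         "drift after interest-rate changes"],
    owner="credit-risk-team",
)
print(record.risk_tier)  # prints "high"
```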
MEASURE: Evaluate and Test
MEASURE is the empirical function. It’s where you test whether your AI systems work as intended and identify harms in practice.
What MEASURE requires:
Define metrics. Before you can measure, you need to know what you're measuring. Useful metrics fall into three categories: performance metrics (accuracy, recall, precision), fairness metrics (demographic parity, equalized odds), and risk metrics (frequency of errors, severity of errors, affected populations).
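The performance metrics named above have standard definitions over confusion-matrix counts. A quick sketch (the counts are hypothetical):

```python
# Standard definitions of the performance metrics above,
# computed from confusion-matrix counts.

def precision(tp: int, fp: int) -> float:
    # Of everything flagged positive, how much was actually positive?
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Of everything actually positive, how much did we catch?
    return tp / (tp + fn)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical tallies: 80 TP, 20 FP, 10 FN, 90 TN
print(precision(80, 20))          # 0.8
print(round(recall(80, 10), 3))   # 0.889
print(accuracy(80, 90, 20, 10))   # 0.85
```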
Evaluate bias and fairness. For AI systems that affect people, test whether the system performs consistently across different demographic groups. Document the methodology and results.
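A minimal demographic parity check can be sketched as comparing positive-outcome rates across groups. The 0.8 cutoff below is the common "four-fifths rule" heuristic, assumed here for illustration; it is not a NIST requirement, and real evaluations use richer methods:

```python
# Sketch of a demographic parity check: compare positive-outcome
# rates across two groups. Data and the 0.8 threshold are illustrative.

def positive_rate(outcomes: list) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower positive rate to the higher one (1.0 = parity)."""
    ra, rb = positive_rate(group_a), positive_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# 1 = favorable outcome, 0 = unfavorable (hypothetical data)
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 62.5% positive
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% positive
ratio = demographic_parity_ratio(group_a, group_b)
print(f"{ratio:.2f}")  # 0.60 -- below 0.8, flag for review
```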
Red-team and adversarial testing. Deliberately attempt to break the AI system. What inputs cause failures? What adversarial manipulation is possible? What edge cases are unhandled?
Ongoing monitoring. MEASURE isn’t a one-time exercise. AI systems drift — the world changes and the model’s performance may degrade. Establish continuous monitoring with defined thresholds for escalation.
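"Defined thresholds for escalation" can be as simple as a rolling accuracy window that trips an alert when performance degrades. The window size and threshold below are illustrative assumptions:

```python
# Sketch of a rolling-accuracy drift monitor with an escalation threshold.
# Window size and threshold are illustrative assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 100, alert_below: float = 0.90):
        self.results = deque(maxlen=window)
        self.alert_below = alert_below

    def record(self, correct: bool) -> bool:
        """Log one prediction outcome; return True if escalation triggers."""
        self.results.append(correct)
        accuracy = sum(self.results) / len(self.results)
        # Only escalate once the window is full, to avoid noisy early alerts.
        return (len(self.results) == self.results.maxlen
                and accuracy < self.alert_below)

monitor = DriftMonitor(window=10, alert_below=0.8)
alerts = [monitor.record(ok) for ok in [True] * 7 + [False] * 3]
print(alerts[-1])  # True: accuracy fell to 0.7 over the full window
```

Production monitoring would track more than accuracy (input distribution shift, latency, fairness metrics), but the pattern — metric, window, threshold, escalation — is the same.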
Third-party evaluation. For high-stakes AI systems, consider independent evaluation by parties without a stake in the system’s success.
Practical MEASURE outputs:
- Evaluation methodology documentation
- Bias and fairness test results
- Red-teaming reports
- Monitoring dashboards and alert thresholds
- Performance trend analysis
MANAGE: Respond and Improve
MANAGE is where analysis becomes action. It’s the response and remediation function.
What MANAGE requires:
Prioritize identified risks. From your MAP and MEASURE work, you have a list of risks. Prioritize by likelihood and severity. Not all risks need the same response.
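Likelihood-times-severity scoring is one common convention for this prioritization. The 1–5 scales and the example risks below are illustrative assumptions:

```python
# Illustrative likelihood x severity scoring for risk prioritization.
# The 1-5 scales and multiplicative score are a common convention,
# assumed here rather than prescribed by NIST.

risks = [
    {"risk": "biased loan decisions",    "likelihood": 4, "severity": 5},
    {"risk": "chatbot gives stale info", "likelihood": 5, "severity": 2},
    {"risk": "model outage",             "likelihood": 2, "severity": 3},
]

for r in risks:
    r["score"] = r["likelihood"] * r["severity"]

prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
print([r["risk"] for r in prioritized])
# ['biased loan decisions', 'chatbot gives stale info', 'model outage']
```

The point of the exercise is the ordering, not the arithmetic: the highest-scoring risks get mitigations first, and low scorers may be accepted and documented.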
Plan mitigations. For each prioritized risk, what are you doing about it? Technical mitigations (retraining, filtering, guardrails), procedural mitigations (human review requirements, usage restrictions), or governance responses (escalation, discontinuation).
Implement and track. Mitigation plans must be implemented and tracked to completion. This is where governance programs often fail — good analysis without follow-through.
Incident response. When something goes wrong (and it will), you need a documented incident response process for AI-related harms. Who gets notified? What triggers a pause in AI system use? How are affected parties informed?
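The "what triggers a pause" question can be encoded as an explicit triage rule so the answer doesn't depend on who is on call. The severity labels and pause rule below are assumptions for illustration, not prescribed categories:

```python
# Sketch of an incident-triage rule: which incidents pause the system
# versus open a ticket. Severity labels and the pause condition are
# illustrative assumptions.

PAUSE_SEVERITIES = {"critical", "high"}

def triage(severity: str, affects_protected_group: bool) -> str:
    """Return the response action for an AI incident."""
    if severity in PAUSE_SEVERITIES or affects_protected_group:
        return "pause_system_and_notify_risk_owner"
    return "log_and_schedule_review"

print(triage("medium", affects_protected_group=True))
# pause_system_and_notify_risk_owner
```

Making the rule executable has a side benefit: it can be unit-tested and audited, unlike a rule that lives only in a policy PDF.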
Feedback loops. MANAGE feeds back into GOVERN, MAP, and MEASURE. Incidents inform updated policies. New risks trigger updated risk registers. The framework is a cycle, not a checklist.
Practical MANAGE outputs:
- Prioritized risk response plan
- Mitigation tracking register
- AI incident response procedure
- After-action review process for incidents
- Framework improvement log
The Cycle in Practice
The four functions are meant to work together as a continuous cycle:
GOVERN establishes the organizational foundation → MAP identifies specific risks → MEASURE evaluates whether those risks are materializing → MANAGE responds to what’s found → Back to GOVERN to update policies based on what was learned.
In a mature program, this cycle runs continuously, with ongoing MAP updates as new AI systems are deployed, continuous MEASURE monitoring for deployed systems, and regular MANAGE reviews to close out mitigations and respond to new findings.
This article is for informational purposes only and does not constitute legal advice. Always consult qualified counsel before making compliance decisions.
