The Ledger · Wednesday, 18 March 2026Issue № 31All issues →

AI Compliance Hub · newsroom

Compliance Guides · 6 min read

The Human-in-the-Loop Test Under California’s ADMT Rules

California’s ADMT rules require meaningful human oversight for certain automated decisions. Here’s what that actually means in practice and how to build a compliant human review process.


California’s ADMT rules don’t just regulate whether AI is used — they regulate how it’s used in relation to human decision-makers. For businesses using ADMT for significant decisions, building a meaningful human oversight process isn’t optional. Here’s what “meaningful” actually means.


Why Human Oversight Matters Under ADMT

The ADMT rules are premised on the idea that automated systems can make errors, perpetuate biases, and produce unfair outcomes — and that human oversight is the check on those risks. This isn’t just a procedural requirement. The CPPA and courts will look at whether human review is real or merely rubber-stamping.

The key concept is “meaningful human review”: a human who can actually understand, question, and override the ADMT output — not a human who clicks “approve” on AI decisions without substantive review.


The “Meaningful” Standard

California’s rules don’t define “meaningful” precisely, but guidance and the regulatory record indicate that meaningful human review means:

1. The reviewer has access to the information the ADMT used

A human reviewer must be able to see what inputs the AI used, not just what decision it recommended. If the AI denied a loan application, the reviewer needs to see the applicant’s file, the factors the AI weighted, and the reasoning.

2. The reviewer has authority to override

The human must have actual authority to reverse, modify, or escalate the AI’s recommendation. A process where an override requires three levels of approval before it can take effect is not meaningful oversight.

3. The reviewer has adequate time

A human reviewing 200 AI decisions per hour isn’t providing meaningful review. The time allocated must be sufficient for substantive consideration.

4. The reviewer is trained

The reviewer must understand how the AI system works at a level sufficient to identify when it’s likely making an error.

5. Override decisions are tracked

Meaningful human oversight requires feedback loops. If the human overrides the AI, that override should be tracked and used to improve the system.


What Doesn’t Count as Meaningful Human Review

The approval stamp. A process where a human receives an AI recommendation and approves it without independent analysis is not human oversight — it’s human delegation.

Post-hoc review only. A process where humans can appeal a decision after harm has occurred, but no human reviews decisions before they’re implemented, provides inadequate protection.

Untrained reviewers. A customer service rep with no knowledge of how a credit model works, given 30 seconds to review a loan denial, is not providing meaningful oversight.

Reviewers without authority. If the reviewer can flag a concern but cannot actually change the outcome without escalating through multiple levels, the effective decision is still algorithmic.


Building a Compliant Human Oversight Process

Step 1: Identify Which ADMT Decisions Need Human Oversight

Not every AI decision needs the same level of human review. Prioritize by impact:

  • High-impact decisions (employment, credit, housing, healthcare): require substantive human review before the decision is implemented
  • Medium-impact decisions: may allow batch review with AI flagging anomalies for closer review
  • Low-impact decisions: monitoring and periodic audits may suffice
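One way to make this prioritization concrete is to encode it as an explicit routing policy, so every ADMT decision is assigned an oversight tier before it is actioned. The sketch below is illustrative only — the tier names and the mapping are assumptions, not terms from the regulation:

```python
from enum import Enum

class Impact(Enum):
    HIGH = "high"      # employment, credit, housing, healthcare
    MEDIUM = "medium"
    LOW = "low"

# Hypothetical policy table mapping impact level to the oversight
# process described above; adjust to your own risk assessment.
REVIEW_POLICY = {
    Impact.HIGH: "pre_decision_human_review",
    Impact.MEDIUM: "batch_review_with_anomaly_flags",
    Impact.LOW: "periodic_audit",
}

def review_tier(impact: Impact) -> str:
    """Return the oversight process required for a decision's impact level."""
    return REVIEW_POLICY[impact]
```

Making the policy a data structure, rather than ad hoc judgment, also gives auditors a single artifact to inspect.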

Step 2: Design the Review Interface

The review interface must present:

  • The consumer’s complete relevant information
  • The ADMT’s recommendation and the factors that drove it
  • Historical context (similar cases, model accuracy rates)
  • A clear mechanism to approve, override, or escalate
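The list above can double as a schema: if the review screen is built from a record that must contain all of these fields, a case cannot reach a reviewer with information missing. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class ReviewCase:
    """Everything a reviewer sees before acting on an ADMT output.

    Field names are illustrative; the point is that each element of
    the review interface maps to a required field.
    """
    consumer_file: dict               # complete relevant consumer information
    recommendation: str               # the ADMT's recommended decision
    top_factors: list[str]            # factors that drove the recommendation
    similar_case_outcomes: list[str]  # historical context for this case type
    model_accuracy: float             # known accuracy for this decision type

    def actions(self) -> list[str]:
        # All three options must always be available to the reviewer.
        return ["approve", "override", "escalate"]
```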

Step 3: Define Override Authority

Document clearly:

  • Who can override ADMT decisions
  • What information is needed to support an override
  • How overrides are recorded
  • What happens to override data (used to improve the model? Reviewed by management?)
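Because override data feeds both audits and model improvement, it helps to capture each override as a structured record at the moment it happens. A minimal sketch, assuming hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """One tracked override, kept for audit and feedback purposes."""
    case_id: str
    reviewer_id: str        # who exercised override authority
    admt_decision: str      # what the ADMT recommended
    final_decision: str     # what the reviewer decided instead
    justification: str      # information supporting the override
    timestamp: str          # when the override was recorded (UTC)

def record_override(case_id: str, reviewer_id: str, admt_decision: str,
                    final_decision: str, justification: str) -> OverrideRecord:
    """Create an override record stamped with the current UTC time."""
    return OverrideRecord(
        case_id, reviewer_id, admt_decision, final_decision, justification,
        datetime.now(timezone.utc).isoformat(),
    )
```

In practice these records would be written to an append-only store so that the audit step below has a complete history to work from.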

Step 4: Set Time Standards

Define minimum review times for different decision categories. For high-impact decisions, meaningful review can’t be done in seconds.

Step 5: Train Reviewers

Training must cover:

  • How the AI system works and what it’s designed to optimize
  • Common AI failure modes (bias, distribution shift, edge cases)
  • How to read the AI’s output and identify red flags
  • Override mechanics and documentation requirements

Step 6: Audit the Oversight Process

Conduct periodic audits of the human oversight process:

  • What percentage of reviews result in overrides?
  • Are there patterns in where overrides occur?
  • Are reviewers spending adequate time on reviews?
  • Are override decisions being tracked and fed back to model improvement?
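The first and third audit questions reduce to simple statistics over the review logs. A sketch of that computation, assuming each log entry carries an `overridden` flag and a `seconds_spent` field (both names are illustrative):

```python
def audit_metrics(reviews: list[dict]) -> dict:
    """Compute basic oversight-audit statistics from review log entries.

    Each entry is assumed to have 'overridden' (bool) and
    'seconds_spent' (float) fields.
    """
    n = len(reviews)
    overrides = sum(1 for r in reviews if r["overridden"])
    total_time = sum(r["seconds_spent"] for r in reviews)
    return {
        "total_reviews": n,
        "override_rate": overrides / n if n else 0.0,
        "avg_review_seconds": total_time / n if n else 0.0,
    }
```

An override rate near zero, or an average review time measured in seconds for high-impact decisions, is exactly the pattern an audit should surface as a red flag.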

The Feedback Loop Requirement

California’s ADMT risk assessment requirements include evaluating whether the ADMT is working as intended and whether there are better alternatives. A meaningful human oversight process generates data that should feed into this evaluation: override rates, override patterns, and reviewer assessments of model quality.

Build this feedback loop from the start. Human oversight that doesn’t generate improvement data is oversight for its own sake, not oversight that actually reduces risk.

Tagged regulations: CCPA ADMT · Human Oversight · California · Automated Decisions
AI Compliance Hub editors
The editorial desk covers AI and cyber regulation across the US, EU, and UK. Tips? editors@aicompliancehub.com
Not legal advice

This article is for informational purposes only and does not constitute legal advice. Always consult qualified counsel before making compliance decisions.
