What to Expect When You Request an AI Bias Audit
A practical walkthrough of the RFQ process for an AI bias audit: what auditors assess, typical timelines and costs, and the right questions to ask.

If you use AI in hiring, credit, healthcare, or other regulated domains, there's a good chance you need a bias audit. New York City's Local Law 144 mandates one for automated employment decision tools used in hiring and promotion. The Colorado AI Act requires deployers of high-risk AI systems to complete impact assessments that document bias testing. Similar requirements are spreading to Virginia, Texas, and other states.
But if you've never procured a bias audit before, the process can be opaque. Here's what to expect — from the initial request for quote (RFQ) through receiving your results.
What Is an AI Bias Audit?
An AI bias audit (also called an algorithmic bias audit or impact assessment) is an independent review of an AI system's outcomes across demographic groups. The core goal is identifying whether the AI produces disparate impact — outcomes that are significantly worse for some demographic groups than others.
Under NYC LL 144, a bias audit specifically measures:
- Selection rates by sex, race/ethnicity, and intersectional categories
- Impact ratios comparing selection rates of each group to the highest-rate group
- Whether any group's selection rate falls below 80% of the highest group (the "4/5ths rule")
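The selection-rate math above is simple enough to sketch in a few lines. In this illustration the applicant and selection counts are invented, and the group labels are placeholders:

```python
# Illustrative only: all counts below are invented.
counts = {
    "Group A": {"applicants": 200, "selected": 60},
    "Group B": {"applicants": 150, "selected": 30},
}

# Selection rate = selected / applicants, per group
rates = {g: c["selected"] / c["applicants"] for g, c in counts.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    # Impact ratio compares each group's rate to the highest-rate group
    impact_ratio = rate / top_rate
    flag = "below 4/5ths threshold" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={impact_ratio:.2f} ({flag})")
```

Here Group B's rate (0.20) is two-thirds of Group A's (0.30), so its impact ratio of 0.67 falls below the 0.8 threshold and would be flagged.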
For Colorado AI Act compliance, impact assessments are broader — they must document algorithmic discrimination risks, evaluation methodology, and mitigation measures.
Step 1: Assembling Your RFQ
When you reach out to bias auditors, you'll need to provide:
About your AI system:
- What does it do? (e.g., "resume screening tool that ranks candidates 1-100")
- What vendor provides it? Do you have an API or batch export?
- What outputs does it produce? (scores, decisions, recommendations)
- What domains/roles is it used for?
About your data:
- How many decisions has the system made in the past 12 months?
- Do you have demographic data on subjects? If not, is it collectible?
- What data formats are available (CSV, API, database export)?
Scope and jurisdiction:
- Which law(s) must the audit satisfy? (NYC LL 144, Colorado AI Act, etc.)
- Is this the first audit or a renewal?
Step 2: What Auditors Will Quote You On
After reviewing your RFQ, auditors will scope based on:
- Data complexity: More decision records = more analysis = higher cost
- Demographic data availability: If you don't have direct demographic data, the auditor will use proxy methods (like BISG — Bayesian Improved Surname Geocoding), which adds analytical work
- Number of AI tools: Each tool is typically a separate audit engagement
- Regulatory scope: NYC LL 144 audits are standardized and narrower; Colorado AI Act impact assessments are broader
- Timeline: Rush engagements cost more
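To make the proxy-method bullet concrete: BISG infers race/ethnicity probabilities by combining a surname-based prior with neighborhood demographics via Bayes' rule. The sketch below is a toy version — every probability table is invented, and real audits use Census surname lists and tract-level data:

```python
# Toy BISG sketch. All probability tables are invented for illustration;
# production BISG uses Census surname frequency and geography data.

# P(group | surname): prior from a surname lookup (invented numbers)
surname_prior = {"group_1": 0.70, "group_2": 0.20, "group_3": 0.10}

# P(group | geography) for the subject's ZIP/tract (invented numbers)
geo_prob = {"group_1": 0.30, "group_2": 0.50, "group_3": 0.20}

# P(group): overall population shares (invented numbers)
base_rate = {"group_1": 0.50, "group_2": 0.30, "group_3": 0.20}

# Bayesian update: posterior proportional to
# P(group | surname) * P(group | geography) / P(group)
unnormalized = {
    g: surname_prior[g] * geo_prob[g] / base_rate[g] for g in surname_prior
}
total = sum(unnormalized.values())
posterior = {g: p / total for g, p in unnormalized.items()}

print(posterior)  # posterior probabilities sum to 1.0
```

The extra analytical work auditors quote for comes from building and validating these probability tables, not from the arithmetic itself.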
Typical cost ranges:
- Simple NYC LL 144 audit (one tool, good data): $3,000–$8,000
- Complex audit (multiple tools, proxy demographics, Colorado + NYC scope): $10,000–$25,000
- Enterprise multi-tool program: $25,000–$75,000+ annually
Step 3: The Audit Process
A typical bias audit engagement runs in three phases:
Phase 1: Data Collection (1–3 weeks)
The auditor will request:
- Decision logs with timestamps and outcomes
- Any demographic data collected
- Documentation of the AI system's methodology from the vendor
You'll need to coordinate with your data team and, often, your AI vendor.
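Before data collection starts, it can save a round-trip to sanity-check that your export contains the fields auditors typically need. A minimal sketch, assuming a CSV export — the column names here are hypothetical, and your auditor will specify the exact schema:

```python
import csv
import io

# Hypothetical required columns; confirm the real list with your auditor.
REQUIRED = {"decision_id", "timestamp", "outcome", "score"}

# Stand-in for your real decision-log export
sample_export = io.StringIO(
    "decision_id,timestamp,outcome,score\n"
    "1,2024-03-01T09:00:00,selected,87\n"
)

reader = csv.DictReader(sample_export)
missing = REQUIRED - set(reader.fieldnames)
print("missing columns:", missing or "none")
```

Catching a missing timestamp or outcome column before the engagement starts keeps Phase 1 on the short end of its 1–3 week range.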
Phase 2: Analysis (2–4 weeks)
The auditor runs statistical analysis:
- Calculating selection/outcome rates by demographic group
- Computing impact ratios
- Testing for statistical significance
- Reviewing the AI system's methodology documentation
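The significance-testing step above can be sketched with a two-proportion z-test on selection rates, using only the standard library. The counts are invented, and real auditors may prefer other tests (e.g., Fisher's exact test for small samples):

```python
import math

def two_proportion_z(sel_a, n_a, sel_b, n_b):
    """Two-sided z-test for a difference in two selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented counts: 60/200 selected in one group vs. 30/150 in another
z, p = two_proportion_z(60, 200, 30, 150)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value suggests the rate gap is unlikely to be sampling noise — which matters because an impact ratio below 0.8 on a handful of decisions is far weaker evidence than the same ratio across thousands.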
Phase 3: Results and Report (1 week)
You'll receive:
- A draft report for your review
- Required publication summary (for NYC LL 144)
- Recommendations if disparate impact is found
- A final signed report for your records
Total typical timeline: 4–8 weeks end-to-end
Key Questions to Ask Auditors
Before you engage, ask potential auditors:
- Have you audited this specific tool or vendor before? (Familiarity speeds things up)
- What methodology do you use when demographic data is unavailable?
- Will your report meet NYC DCWP's specific format requirements?
- What happens if we find significant disparate impact? (Do they help you remediate?)
- Do you offer annual renewal programs at a discount?
- Are you truly independent? (The auditor cannot be your AI vendor)
What to Do With the Results
If the audit finds no significant disparate impact: post the required summary, keep the report, and set your annual renewal calendar.
If the audit finds disparate impact: you have options — you are not automatically disqualified from using the tool. Common next steps include:
- Adjusting thresholds or weights
- Supplementing AI decisions with additional human review
- Requesting remediation from the AI vendor
- In some cases, replacing the tool
Do not post audit results that show disparate impact and then continue using the tool unchanged — that is the highest-risk outcome legally.
Finding a Qualified Auditor
Browse verified bias audit firms in our directory, filterable by jurisdiction and AI system type.
Related Regulations
Not sure which AI laws apply to your business?
Use our free compliance checker — answer 4 questions, get instant results.
Not legal advice. This article is for informational purposes only. Always consult a qualified attorney for compliance decisions.