Algorithmic Discrimination

When an automated system treats people differently based on protected characteristics — such as race, gender, age, or disability — resulting in unfair or unlawful outcomes.

Also known as: AI discrimination, automated discrimination, discriminatory algorithm

Detailed Definition

Algorithmic discrimination occurs when an AI or automated decision-making system produces outcomes that systematically disadvantage individuals based on protected characteristics, even if those characteristics are not explicitly used as inputs.

How It Happens

Algorithmic discrimination can emerge from:

  • Biased training data — historical data that reflects past discrimination gets encoded into the model
  • Proxy variables — features that correlate with protected characteristics (e.g., zip code as a proxy for race; see the sketch after this list)
  • Feedback loops — systems trained on their own outputs can amplify initial biases over time
  • Measurement bias — labels or outcomes in training data that reflect human bias

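To make the first two mechanisms concrete, the sketch below trains a model on synthetic data with the protected attribute removed from the inputs. All names, numbers, and the model choice are illustrative assumptions rather than a reference implementation: a zip-code proxy plus historically biased labels is enough for the disparity to reappear in the model's predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (0/1) and a proxy feature that correlates with it.
group = rng.integers(0, 2, size=n)
zip_code = np.where(rng.random(n) < 0.85, group, 1 - group)

# A legitimate-looking feature, independent of group membership.
income = rng.normal(60, 15, size=n)

# Historical labels encode past discrimination: group 1 was held to a
# higher bar at the same income (biased training data / measurement bias).
approved = (income - 12 * group + rng.normal(0, 5, size=n) > 55).astype(int)

# Train WITHOUT the protected attribute: only the proxy and income.
X = np.column_stack([zip_code, income])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# Selection rates by protected group: the disparity survives dropping
# the protected attribute from the inputs.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.1%}")
```

Even though `group` never appears in the feature matrix, the model recovers much of the historical disparity through the zip-code proxy, which is why simply removing protected attributes is not, on its own, a reliable mitigation.
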
Under the Colorado AI Act, deployers must use "reasonable care" to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. The law specifically requires impact assessments to identify and mitigate such risks.

Under NYC Local Law 144, employers using AI tools for hiring must conduct annual independent bias audits — a direct mechanism for detecting and disclosing algorithmic discrimination.
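
As a rough illustration of what such an audit measures, the sketch below computes a selection rate per demographic category and an impact ratio comparing each category to the most-selected one. It is a simplified example with made-up category labels and counts, not the methodology prescribed by the law or its implementing rules.

```python
from collections import defaultdict

def impact_ratios(records):
    """Selection rate and impact ratio per category.

    `records` is an iterable of (category, selected) pairs, where
    `selected` is True if the candidate was advanced. The impact ratio
    compares each category's selection rate to the highest one.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for category, selected in records:
        totals[category] += 1
        hits[category] += int(selected)

    rates = {c: hits[c] / totals[c] for c in totals}
    top = max(rates.values())
    return {c: (rate, rate / top) for c, rate in rates.items()}

# Hypothetical screening outcomes for two demographic categories.
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 35 + [("B", False)] * 65)
for cat, (rate, ratio) in sorted(impact_ratios(outcomes).items()):
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

An impact ratio well below 1.0 for a category is exactly the kind of disparity these audits are meant to surface and disclose.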