1. Discrimination & Toxicity (Post-deployment)

Quality-of-Service Harms

These harms occur when algorithmic systems disproportionately underperform for certain groups of people along social categories of difference such as disability, ethnicity, gender identity, and race.
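In practice, this kind of harm is detected by breaking a performance metric down by group. A minimal sketch, using hypothetical labels and group assignments (all names and data below are illustrative, not from the repository):

```python
# Measuring per-group performance disparity (hypothetical data).
# A quality-of-service harm shows up as a gap in a metric such as
# accuracy when results are disaggregated by group membership.

def accuracy(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each group label."""
    out = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        out[g] = accuracy([y_true[i] for i in idx],
                          [y_pred[i] for i in idx])
    return out

# Illustrative labels: the model underperforms for group "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

scores = per_group_accuracy(y_true, y_pred, groups)
gap = max(scores.values()) - min(scores.values())  # 1.0 - 0.25 = 0.75
```

A large gap like this, rather than a low aggregate score, is the signature of unequal performance across groups.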

Source: MIT AI Risk Repository, risk ID mit143

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit143

Domain lineage: 1. Discrimination & Toxicity (156 mapped risks) > 1.3 Unequal performance across groups

Mitigation strategy

1. Implement post-processing bias mitigation: Apply post-deployment methods, such as threshold adjustment or reject option classification, to the algorithmic system's outputs to directly minimize disparities in performance metrics, like Equal Opportunity Difference, across the affected social categories of difference.

2. Establish continuous performance monitoring and auditing: Mandate robust governance structures to continuously audit and monitor system performance in the live environment for evidence of disparate impact, focusing on fine-grained and intersectional subgroup analyses to detect unequal outcomes early.

3. Prioritize remedial data rebalancing and retraining: Where performance disparities are identified, conduct targeted data rebalancing, such as protected-category oversampling, and employ bias-aware retraining strategies to improve model generalization and predictive accuracy for the disproportionately affected populations.
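The first strategy above, threshold adjustment, can be sketched as follows. All scores, group names, and threshold values here are hypothetical; Equal Opportunity Difference (EOD) is the gap in true positive rates between two groups, and per-group thresholds can shrink it without retraining the model:

```python
# Post-processing bias mitigation via per-group threshold adjustment
# (a sketch with hypothetical data).

def predict(scores, groups, thresholds):
    """Apply a group-specific decision threshold to each score."""
    return [1 if s >= thresholds[g] else 0 for s, g in zip(scores, groups)]

def eod(y_true, y_pred, groups, g0, g1):
    """Equal Opportunity Difference: TPR(g1) - TPR(g0)."""
    def group_tpr(g):
        pairs = [(t, p) for t, p, gi in zip(y_true, y_pred, groups) if gi == g]
        positives = [(t, p) for t, p in pairs if t == 1]
        return sum(p for _, p in positives) / len(positives)
    return group_tpr(g1) - group_tpr(g0)

scores = [0.9, 0.8, 0.3, 0.7, 0.55, 0.4, 0.45, 0.2]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
y_true = [1, 1, 0, 1, 1, 1, 0, 0]

# One shared threshold: group "b" positives score lower, so its TPR lags.
shared = predict(scores, groups, {"a": 0.6, "b": 0.6})
# Lowering group "b"'s threshold narrows the TPR gap.
adjusted = predict(scores, groups, {"a": 0.6, "b": 0.5})
```

The adjusted thresholds trade a possible increase in false positives for group "b" against a smaller EOD; the right operating point depends on the deployment context.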
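The third strategy, protected-category oversampling, can also be sketched briefly. The record structure and field name below are hypothetical; the idea is simply to duplicate examples from under-represented groups until the training set is balanced, ahead of a bias-aware retraining pass:

```python
# Protected-category oversampling (a sketch with hypothetical records).
import random

def oversample(records, group_key):
    """Duplicate minority-group records until all groups are equal size."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(v) for v in by_group.values())
    rng = random.Random(0)  # fixed seed so the rebalancing is reproducible
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

data = [{"group": "a"}] * 6 + [{"group": "b"}] * 2
balanced = oversample(data, "group")
# Both groups now contribute 6 records each.
```

Oversampling alone does not guarantee equal performance; it rebalances what the model sees, and should be validated with the same subgroup audits described in strategy 2.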