1. Discrimination & Toxicity

Service/benefit loss

Degraded or total loss of the benefits of using algorithmic systems, caused by inequitable system performance across identity groups.

Source: MIT AI Risk Repository (mit146)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit146

Domain lineage

1. Discrimination & Toxicity

156 mapped risks

1.3 > Unequal performance across groups

Mitigation strategy

1. Prioritize input quality and representation: Perform comprehensive auditing of training and validation datasets to identify and address deficiencies in demographic representation and annotation quality for all identity subgroups. Actively collect or synthetically generate high-quality, representative data to ensure equitable coverage across the sensitive attributes identified as contributing to unequal performance.

2. Mandate equitable performance evaluation and modeling: Establish and rigorously enforce the use of disaggregated performance metrics (e.g., accuracy, error rates) across all identity subgroups during model development and validation. Apply fairness-aware machine learning techniques to actively mitigate documented disparities, aiming for performance parity or constrained inequity.

3. Implement robust post-deployment monitoring and feedback loops: Deploy continuous, real-time monitoring systems to track system performance disaggregated by identity groups in the production environment. Establish a clear, accessible, and responsive mechanism for receiving and prioritizing user feedback related to service or benefit loss to inform urgent model updates.
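The disaggregated evaluation described in step 2 can be sketched as follows. This is a minimal illustration, not part of the repository entry: the function name, the tolerance of 0.05, and the toy labels are all assumptions chosen for the example.

```python
# Sketch: per-subgroup accuracy and the largest pairwise gap between groups.
# All names and values here are illustrative, not from the MIT repository.
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Return accuracy per identity subgroup and the max accuracy gap."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    per_group = {g: hits[g] / totals[g] for g in totals}
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Toy labels for two hypothetical subgroups "a" and "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

per_group, gap = disaggregated_accuracy(y_true, y_pred, groups)
print(per_group)           # accuracy per subgroup
print(gap)                 # flag for review if this exceeds a chosen tolerance
if gap > 0.05:             # 0.05 is an assumed parity tolerance, not a standard
    print("parity violated: apply fairness-aware mitigation")
```

The same pattern extends to any disaggregated metric (error rate, false-positive rate, and so on) by replacing the per-record hit test; the key design point is that aggregation is always keyed by subgroup rather than pooled.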

ADDITIONAL EVIDENCE

"It conveyed the opposite message from what I had originally intended, and cost somebody else a lot (of time)."