1. Discrimination & Toxicity

Fairness

Impartial and just treatment without favouritism or discrimination.

Source: MIT AI Risk Repository (mit641)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit641

Domain lineage

1. Discrimination & Toxicity

156 mapped risks

1.3 > Unequal performance across groups

Mitigation strategy

1. Establish and enforce structured, objective evaluation criteria and processes, using calibration meetings to ensure consistency and minimize subjective judgment across different groups.

2. Mandate the collection and use of diverse, representative data, applying pre-processing techniques such as reweighting or resampling to mitigate systemic biases in the input data.

3. Integrate fairness-aware algorithms and continuous post-deployment bias detection to monitor for unequal performance across identified groups and enable timely intervention or model retraining.
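Steps 2 and 3 above can be sketched in a few lines of Python. This is a minimal illustration, not part of the repository entry: the function names, the inverse-frequency weighting scheme, and the use of a simple accuracy gap as the detection metric are all assumptions; production systems typically rely on fairness toolkits and richer metrics.

```python
from collections import Counter

def reweight(groups, labels):
    """Pre-processing reweighting (step 2): weight each sample inversely
    to the frequency of its (group, label) combination, so over-represented
    combinations do not dominate training. Illustrative sketch only."""
    counts = Counter(zip(groups, labels))
    n, k = len(groups), len(counts)
    # Target: each (group, label) combination contributes equal total weight.
    return [n / (k * counts[(g, y)]) for g, y in zip(groups, labels)]

def group_accuracy_gap(groups, y_true, y_pred):
    """Post-deployment bias detection (step 3): per-group accuracy and the
    worst-case gap between groups; a large gap would trigger review or
    retraining."""
    correct, total = Counter(), Counter()
    for g, t, p in zip(groups, y_true, y_pred):
        total[g] += 1
        correct[g] += int(t == p)
    acc = {g: correct[g] / total[g] for g in total}
    return acc, max(acc.values()) - min(acc.values())
```

For example, `group_accuracy_gap(["a", "a", "b", "b"], [1, 0, 1, 0], [1, 0, 1, 1])` reports accuracy 1.0 for group `a`, 0.5 for group `b`, and a gap of 0.5, which a monitoring loop could compare against a tolerance threshold.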