1. Discrimination & Toxicity (Post-deployment)

Discrimination

More broadly, bad decisions or errors by AI tools could lead to discrimination or deeper inequality.

Source: MIT AI Risk Repository (mit908)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit908

Domain lineage

1. Discrimination & Toxicity

156 mapped risks

1.1 > Unfair discrimination and misrepresentation

Mitigation strategy

1. **Implement a Robust AI Governance and Ethical Framework.** Establish clear organizational policies, accountability mechanisms, and ethical guidelines that mandate fairness and non-discrimination across the entire AI lifecycle. This includes forming a diverse oversight board with legal, technical, and sociological expertise to systematically review and mitigate discriminatory risk, in line with forthcoming regulatory standards.

2. **Conduct Continuous Fairness Auditing and Bias Impact Assessments.** Implement a socio-technical framework for the regular, quantitative measurement of model performance using multiple fairness metrics (e.g., statistical parity, equalized odds) across all protected and vulnerable demographic subgroups *post-deployment*. This process must include ongoing monitoring of model outputs and a formal feedback loop to trigger retraining or recalibration when bias drift is detected.

3. **Mandate Data Auditing and Preprocessing for Dataset Representativeness.** Systematically audit all training datasets to detect and document historical and representation biases. Apply scientifically validated data preprocessing techniques (e.g., re-weighting, synthetic data generation, or balancing) to ensure that the data achieves adequate and equitable representation of the entire population the AI system is intended to serve.
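The fairness metrics named in strategy 2 can be computed directly from a model's predictions and group labels. The sketch below is a minimal, dependency-free illustration (function names and the gap-based formulation are our own, not part of the repository entry): statistical parity compares positive-prediction rates across groups, while equalized odds compares true-positive and false-positive rates. In a monitoring pipeline, a gap exceeding a chosen threshold would trigger the feedback loop described above.

```python
from collections import defaultdict

def statistical_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates between any two groups.
    A value near 0 indicates statistical parity; the threshold at which
    a gap triggers retraining is a policy decision, not fixed here."""
    preds_by_group = defaultdict(list)
    for pred, g in zip(y_pred, groups):
        preds_by_group[g].append(pred)
    rates = [sum(p) / len(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

def equalized_odds_gaps(y_true, y_pred, groups):
    """Per-group gaps in true-positive rate (TPR) and false-positive
    rate (FPR). Equalized odds requires both gaps to be near 0."""
    pairs_by_group = defaultdict(list)
    for t, p, g in zip(y_true, y_pred, groups):
        pairs_by_group[g].append((t, p))
    tprs, fprs = [], []
    for pairs in pairs_by_group.values():
        pos = [p for t, p in pairs if t == 1]  # predictions on actual positives
        neg = [p for t, p in pairs if t == 0]  # predictions on actual negatives
        tprs.append(sum(pos) / len(pos) if pos else 0.0)
        fprs.append(sum(neg) / len(neg) if neg else 0.0)
    return max(tprs) - min(tprs), max(fprs) - min(fprs)
```

Using both metrics together matters: a model can satisfy statistical parity while still erring far more often for one group, which only the equalized-odds gaps reveal.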
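One concrete form of the re-weighting mentioned in strategy 3 is the reweighing scheme of Kamiran and Calders, which assigns each training example a weight so that group membership and label become statistically independent in the weighted data. A minimal sketch (the function name and interface are illustrative, not from the source):

```python
from collections import Counter

def reweighing_weights(labels, groups):
    """Per-example weights w(g, y) = P(g) * P(y) / P(g, y).
    Under-represented (group, label) combinations receive weights
    above 1, over-represented ones below 1; a perfectly balanced
    dataset yields all weights equal to 1."""
    n = len(labels)
    p_group = Counter(groups)                 # marginal counts per group
    p_label = Counter(labels)                 # marginal counts per label
    p_joint = Counter(zip(groups, labels))    # joint counts
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]
```

The weights would then be passed to any learner that accepts per-sample weights; this corrects representation bias without altering or discarding the audited records themselves.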