Discrimination
The creation, perpetuation, or exacerbation of inequalities and biases at a large scale.
ENTITY
3 - Other
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit1035
Domain lineage
1. Discrimination & Toxicity
1.1 > Unfair discrimination and misrepresentation
Mitigation strategy
1. Prioritize Fairness-by-Design and Data Governance: Proactively integrate fairness principles into the AI system's entire lifecycle, commencing with problem framing and hypothesis generation. This necessitates rigorous data governance to ensure all training, validation, and testing datasets are sufficiently representative, complete, and devoid of historical biases, augmented by methods like re-weighting or collecting more diverse samples.
2. Implement Robust Algorithmic Fairness and Transparency Controls: Systematically apply algorithmic fairness techniques and metrics (e.g., statistical parity, equalized odds) to quantify and mitigate disparate impact across sensitive attributes. Concurrently, deploy Explainable AI (XAI) methods, such as SHAP or LIME, to provide transparent and understandable decision-making logic, facilitating accountability and root-cause analysis of bias.
3. Establish Comprehensive Governance and Human Oversight Mechanisms: Define clear organizational governance frameworks, including a dedicated risk management system, to mandate accountability and ethical non-discrimination throughout deployment. Institute clear human-in-the-loop processes for consequential decisions, granting human operators the capacity and authority to meaningfully review, override, and correct potentially discriminatory AI outputs.
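As a minimal sketch of the fairness metrics named in strategy 2, the snippet below computes the statistical parity difference and the equalized-odds gaps (TPR and FPR differences) for a binary classifier across a binary sensitive attribute. All data, function names, and thresholds here are illustrative assumptions, not part of this entry; in practice a library such as Fairlearn or AIF360 would typically be used instead.

```python
def rate(preds, cond):
    """Fraction of positive predictions among rows where cond is True."""
    selected = [p for p, c in zip(preds, cond) if c]
    return sum(selected) / len(selected) if selected else 0.0

def statistical_parity_difference(y_pred, group):
    """P(y_pred = 1 | group = 0) - P(y_pred = 1 | group = 1)."""
    return (rate(y_pred, [g == 0 for g in group])
            - rate(y_pred, [g == 1 for g in group]))

def equalized_odds_gaps(y_true, y_pred, group):
    """Return (TPR gap, FPR gap) between group 0 and group 1."""
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        r0 = rate(y_pred, [g == 0 and t == label for g, t in zip(group, y_true)])
        r1 = rate(y_pred, [g == 1 and t == label for g, t in zip(group, y_true)])
        gaps.append(r0 - r1)
    return tuple(gaps)

# Synthetic example: group 0 receives positive predictions more often.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

spd = statistical_parity_difference(y_pred, group)
tpr_gap, fpr_gap = equalized_odds_gaps(y_true, y_pred, group)
print(spd, tpr_gap, fpr_gap)  # all three gaps are 0.5 for this toy data
```

A gap of 0 on a metric indicates parity between groups on that criterion; monitoring such gaps over time supports the post-deployment oversight described in strategy 3.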