Bias and discrimination
The decision process used by AI systems can produce biased outcomes, either because the system applies criteria that themselves generate bias or because it is trained on a history of biased decisions.
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit126
Domain lineage
1. Discrimination & Toxicity
1.1 > Unfair discrimination and misrepresentation
Mitigation strategy
1. **Implement Continuous Bias Auditing and Monitoring:** Conduct rigorous, regular, post-deployment bias audits utilizing established fairness metrics (e.g., statistical parity, equalized odds, disparate impact) to quantify and detect discriminatory performance gaps and feature attribution disparities across all sensitive demographic subsets.
2. **Establish Mandatory Human-in-the-Loop Oversight:** Institute clear governance protocols requiring meaningful human review and intervention for decisions generated by the AI system, particularly those pertaining to high-stakes domains such as employment, lending, and justice, to intercept and correct outputs that reflect unfair discrimination.
3. **Deploy Explainable AI (XAI) Techniques:** Ensure full model transparency and interpretability by utilizing XAI methods to elucidate the factors and rationale behind the system's biased outcomes, thereby facilitating systematic accountability and the development of targeted algorithmic or data adjustments.
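The first mitigation can be sketched in code. The snippet below is a minimal, illustrative audit helper, not a prescribed implementation: it assumes binary predictions and a binary sensitive attribute, and computes two of the metrics named above (statistical parity difference and the disparate impact ratio); the function name and sample data are hypothetical.

```python
import numpy as np

def bias_audit(y_pred, group):
    """Compute simple group-fairness metrics for a binary classifier.

    y_pred: 0/1 predictions (1 = favorable outcome, e.g. loan approved)
    group:  0/1 sensitive-attribute labels
            (1 = privileged group, 0 = unprivileged group)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)

    # Selection (favorable-outcome) rate within each demographic subset
    rate_priv = y_pred[group == 1].mean()
    rate_unpriv = y_pred[group == 0].mean()

    return {
        # Statistical parity difference: 0 means equal selection rates
        "statistical_parity_diff": rate_unpriv - rate_priv,
        # Disparate impact ratio: the common "80% rule" flags values < 0.8
        "disparate_impact": rate_unpriv / rate_priv,
    }

# Illustrative audit: the unprivileged group is selected half as often
y_pred = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
metrics = bias_audit(y_pred, group)
# metrics["disparate_impact"] is 0.5, well below the 0.8 threshold
```

In a continuous-monitoring setup, a check like this would run on a schedule over fresh production predictions, with an alert raised whenever a metric crosses a policy threshold; equalized odds would additionally require ground-truth labels to compare error rates across groups.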