1. Discrimination & Toxicity

Unfairness and Discrimination

Social bias is an unfairly negative attitude towards a social group or individuals based on one-sided or inaccurate information, typically pertaining to widely disseminated negative stereotypes regarding gender, race, religion, etc.

Source: MIT AI Risk Repository (mit64)

ENTITY: 3 - Other

INTENT: 3 - Other

TIMING: 2 - Post-deployment

Risk ID: mit64

Domain lineage: 1. Discrimination & Toxicity (156 mapped risks) > 1.1 Unfair discrimination and misrepresentation

Mitigation strategy

1. Implement Continuous Post-Deployment AI Governance and Auditing: Establish and mandate an AI governance framework with clear accountability for fairness, focused on continuous post-deployment monitoring of system outputs using established fairness metrics (e.g., disparate impact ratio, equalized odds) to detect and remediate emergent bias in real-world settings (see the sketch after this list).

2. Systematic Data and Algorithmic Bias Remediation: Conduct rigorous, multi-modal audits of training datasets to identify and mitigate historical and social biases embedded in the source data, then apply algorithmic fairness constraints and fine-tuning techniques to minimize bias propagation during model development and updates.

3. Institutionalize Procedural Fairness and Cognitive Bias Interventions: Institute a policy requiring standardized decision-making processes and mandatory cognitive-bias training for personnel who operate the model or make decisions influenced by its output, emphasizing techniques such as 'slowing down' and perspective-taking to interrupt automated stereotypic processing.
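
The fairness metrics named in the first mitigation item can be operationalized in a post-deployment monitoring job. The sketch below is minimal and illustrative, assuming binary predictions and a binary protected-group attribute; the function names, the four-fifths (0.8) disparate-impact threshold, and the 0.1 equalized-odds gap threshold are assumptions for illustration, not part of the repository entry.

```python
# Illustrative post-deployment fairness check (assumptions: binary
# predictions, binary protected attribute, thresholds chosen by convention:
# the four-fifths rule for disparate impact, 0.1 for the equalized-odds gap).
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-prediction rates: P(yhat=1|group=1) / P(yhat=1|group=0)."""
    rate_ref = y_pred[group == 0].mean()   # selection rate, reference group
    rate_prot = y_pred[group == 1].mean()  # selection rate, protected group
    return rate_prot / rate_ref

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group difference in true-positive or false-positive rate."""
    gaps = []
    for outcome in (1, 0):                 # TPR when outcome=1, FPR when outcome=0
        mask = y_true == outcome
        rate_ref = y_pred[mask & (group == 0)].mean()
        rate_prot = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_ref - rate_prot))
    return max(gaps)

# Synthetic stand-in for a batch of logged production decisions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)   # observed outcomes
y_pred = rng.integers(0, 2, 1000)   # model decisions
group = rng.integers(0, 2, 1000)    # protected-group membership

di = disparate_impact_ratio(y_pred, group)
eo = equalized_odds_gap(y_true, y_pred, group)
if di < 0.8 or eo > 0.1:
    print(f"Fairness alert: DI={di:.2f}, EO gap={eo:.2f}")
else:
    print(f"Within thresholds: DI={di:.2f}, EO gap={eo:.2f}")
```

In practice such a check would run on a schedule over batches of logged production decisions, with alerts routed to whoever holds fairness accountability under the governance framework described in item 1.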