1. Discrimination & Toxicity

Stereotyping

Derogatory or otherwise harmful stereotyping or homogenisation of individuals, groups, societies or cultures due to the mis-representation, over-representation, under-representation, or non-representation of specific identities, groups or perspectives

Source: MIT AI Risk Repository (mit1356)

ENTITY: 2 - AI
INTENT: 2 - Unintentional
TIMING: 2 - Post-deployment
Risk ID: mit1356

Domain lineage: 1. Discrimination & Toxicity (156 mapped risks) > 1.1 Unfair discrimination and misrepresentation

Mitigation strategy

1. Continuously monitor system outputs in real time using fairness metrics (e.g., demographic parity, equalized odds) and apply post-processing adjustments (e.g., output recalibration) to mitigate observed stereotyping before final delivery to users (see the monitoring sketch after this list).

2. Integrate the AI system into an established AI governance framework that mandates regular, independent bias audits and incorporates human-in-the-loop mechanisms at critical decision points, so that stereotyping-prone outputs can be overridden or contextualized (see the review-routing sketch below).

3. Conduct a comprehensive retrospective audit of the training datasets to identify under-representation, mis-representation, and embedded societal biases (e.g., linguistic or cultural biases), then retrain the model on diverse, representative data with fairness constraints applied during training (in-processing techniques; see the constrained-training sketch below).
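As a minimal sketch of the monitoring step in item 1, the Python below computes demographic-parity and equalized-odds gaps over a batch of binary predictions and flags a breach when either exceeds a tolerance. The binary group encoding, the threshold value, and the synthetic data are illustrative assumptions, not part of the repository entry.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates between groups."""
    gaps = []
    for label in (0, 1):  # label == 1 compares TPRs, label == 0 compares FPRs
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (group == 0)].mean()
                        - y_pred[mask & (group == 1)].mean()))
    return max(gaps)

# Hypothetical monitoring batch: binary predictions, one binary group attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)

ALERT_THRESHOLD = 0.1  # illustrative tolerance, not a normative standard
if demographic_parity_diff(y_pred, group) > ALERT_THRESHOLD:
    print("Demographic parity gap exceeds threshold; trigger recalibration.")
if equalized_odds_diff(y_true, y_pred, group) > ALERT_THRESHOLD:
    print("Equalized odds gap exceeds threshold; trigger recalibration.")
```

In a deployed pipeline these checks would run over sliding windows of logged predictions, with the post-processing recalibration step adjusting per-group decision thresholds when an alert fires.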
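Item 2 is largely procedural, but the human-in-the-loop gate at a decision point can be sketched. In the hypothetical routing function below, outputs whose risk score (assumed to come from an upstream stereotyping/toxicity classifier, not shown) exceeds a policy threshold are held for human review instead of being released; all names and the threshold are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    output: str
    risk_score: float  # assumed output of an upstream stereotyping classifier

REVIEW_THRESHOLD = 0.7  # hypothetical value set by the governance policy

def route(decision: Decision, review_queue: list) -> str | None:
    """Release low-risk outputs; hold flagged ones for human override or contextualization."""
    if decision.risk_score >= REVIEW_THRESHOLD:
        review_queue.append(decision)  # human-in-the-loop decision point
        return None
    return decision.output

queue: list[Decision] = []
print(route(Decision("benign answer", 0.12), queue))   # released to the user
print(route(Decision("flagged answer", 0.85), queue))  # None: held for review
```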
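For the in-processing step in item 3, one simple form of a fairness constraint is a demographic-parity penalty added to the training loss. The sketch below trains a logistic regression by gradient descent with such a penalty; the penalty weight `lam`, the function name, and the synthetic data are assumptions for illustration, not the repository's prescribed method.

```python
import numpy as np

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression with a demographic-parity penalty:
    loss = mean log-loss + lam * |mean score(group 0) - mean score(group 1)|."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(scale=0.01, size=X.shape[1]), 0.0
    n0 = max((group == 0).sum(), 1)
    n1 = max((group == 1).sum(), 1)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad = (p - y) / len(y)                  # gradient of mean log-loss w.r.t. logits
        gap = p[group == 0].mean() - p[group == 1].mean()
        # Gradient of |gap| w.r.t. each logit: +/- sigmoid'(logit) / group size.
        dgap = np.where(group == 0, p * (1 - p) / n0, -p * (1 - p) / n1)
        grad += lam * np.sign(gap) * dgap
        w -= lr * (X.T @ grad)
        b -= lr * grad.sum()
    return w, b

# Illustrative synthetic data with a group-correlated label.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
group = rng.integers(0, 2, 500)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=500) > 0).astype(float)
w, b = train_fair_logreg(X, y, group, lam=2.0)
```

A larger `lam` pulls the two groups' score distributions together at some cost in accuracy; in practice it would be tuned against both the fairness metrics from item 1 and task performance.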