1. Discrimination & Toxicity

Stereotyping

Derogatory or otherwise harmful stereotyping or homogenisation of individuals, groups, societies, or cultures due to the misrepresentation, over-representation, under-representation, or non-representation of specific identities, groups, or perspectives.

Source: MIT AI Risk Repository (mit974)

| Field | Value |
| --- | --- |
| Entity | 2 - AI |
| Intent | 3 - Other |
| Timing | 3 - Other |
| Risk ID | mit974 |

Domain lineage

1. Discrimination & Toxicity > 1.0 Discrimination & Toxicity (156 mapped risks)

Mitigation strategy

1. **Prioritize diverse and representative data curation and validation.** Systematically audit, augment, and refine training datasets to ensure equitable representation across all relevant demographic and social dimensions. This proactively mitigates the introduction or amplification of societal stereotypes by correcting imbalances, under-representation, or non-representation in the foundational data layer (a representation-audit sketch follows this list).
2. **Implement comprehensive AI governance and continuous monitoring.** Establish robust AI governance frameworks and ethical oversight boards that mandate rigorous, continuous auditing of models post-deployment. This must include established fairness metrics and real-time bias detection mechanisms to identify and remediate stereotypic outputs or discriminatory effects before they escalate (see the fairness-metric sketch below).
3. **Enhance development-team diversity and bias-aware design.** Integrate cognitive diversity, including social scientists and domain experts, into the AI development lifecycle from the initial problem-framing stage. This ensures that the chosen objective function, target metrics, and model architecture are scrutinized for potential systemic bias (e.g., proxy variables reflecting existing discrimination) and that explicitly fair, bias-aware algorithms and user feedback mechanisms are employed (a reweighing sketch closes the section).
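As a minimal illustration of the first strategy, the sketch below audits a tabular dataset's group representation against a reference distribution. The column names, reference proportions, and tolerance are hypothetical placeholders, not values prescribed by the repository entry.

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str,
                         reference: dict, tolerance: float = 0.05) -> pd.DataFrame:
    """Compare each group's share of the dataset against a reference
    distribution (e.g., census proportions) and flag deviations.

    `reference` maps group label -> expected proportion; `tolerance`
    is the maximum acceptable absolute gap (both assumptions here).
    """
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 4),
            "expected_share": expected,
            "gap": round(share - expected, 4),
            "flag": "under-represented" if share < expected - tolerance
                    else "over-represented" if share > expected + tolerance
                    else "ok",
        })
    return pd.DataFrame(rows)

# Hypothetical usage: audit a training set against census-style proportions.
# df = pd.read_csv("training_data.csv")
# report = audit_representation(df, "gender", {"female": 0.51, "male": 0.49})
# print(report[report["flag"] != "ok"])
```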
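For the second strategy, one common pair of fairness metrics is the demographic parity difference and the true-positive-rate gap (half of the equalized-odds criterion). The sketch below computes both in plain NumPy; the monitoring threshold and alerting function are assumptions for illustration only.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  sensitive: np.ndarray) -> float:
    """Largest gap in positive-prediction rate between any two groups;
    values near 0 mean the model selects all groups at similar rates."""
    rates = [float(y_pred[sensitive == g].mean()) for g in np.unique(sensitive)]
    return max(rates) - min(rates)

def true_positive_rate_gap(y_true: np.ndarray, y_pred: np.ndarray,
                           sensitive: np.ndarray) -> float:
    """Largest gap in recall (TPR) between groups."""
    tprs = []
    for g in np.unique(sensitive):
        mask = (sensitive == g) & (y_true == 1)
        if mask.any():
            tprs.append(float(y_pred[mask].mean()))
    return max(tprs) - min(tprs) if len(tprs) > 1 else 0.0

# Hypothetical monitoring hook: the 0.10 threshold is an assumption,
# not a value prescribed by the repository entry.
# if demographic_parity_difference(y_pred, groups) > 0.10:
#     trigger_bias_review()  # placeholder alerting function
```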
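One widely cited bias-aware technique of the kind the third strategy names is reweighing (Kamiran & Calders), which weights each example so that group membership and label become statistically independent. The sketch below is one possible implementation, with placeholder column names; the repository entry does not mandate this specific algorithm.

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str,
                       label_col: str) -> pd.Series:
    """Per-example weights w(g, y) = P(g) * P(y) / P(g, y), so that the
    weighted dataset shows no association between group and label."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n
    return df.apply(
        lambda r: p_group[r[group_col]] * p_label[r[label_col]]
                  / p_joint[(r[group_col], r[label_col])],
        axis=1,
    )

# The weights can be passed to most learners that accept sample weights:
# model.fit(X, y, sample_weight=reweighing_weights(df, "gender", "hired"))
```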