1. Discrimination & Toxicity

Unfair representation

Mis-, under-, or over-representing certain identities, groups, or perspectives, or failing to represent them at all (e.g. via homogenisation or stereotypes)

Source: MIT AI Risk Repository (mit259)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit259

Domain lineage

1. Discrimination & Toxicity

156 mapped risks

1.1 > Unfair discrimination and misrepresentation

Mitigation strategy

1. Implement rigorous, multi-attribute data auditing and ethical curation processes to ensure training datasets are verifiably representative and balanced across all relevant demographic and sensitive attributes, using demographically aware sampling techniques to prevent underrepresentation bias.

2. Employ fairness-by-design principles by integrating fairness-aware algorithms, such as adversarial debiasing or constraint-based optimization, during model training to explicitly minimize differential performance and outcomes for identified subgroups.

3. Establish a continuous fairness-monitoring framework with human-in-the-loop oversight for all deployment stages, mandating regular audits using established fairness metrics (e.g., disparate impact ratios) to detect and correct emerging representational biases in real time.
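The disparate impact ratio mentioned above compares the rate of favourable outcomes between an unprivileged and a privileged group. A minimal sketch of that calculation, with illustrative data and function names (the 0.8 "four-fifths rule" threshold is a common convention, not part of this entry):

```python
def disparate_impact_ratio(outcomes, groups, unprivileged, privileged):
    """Ratio of positive-outcome rates: unprivileged group / privileged group.

    outcomes: list of 0/1 model outcomes (1 = favourable).
    groups:   parallel list of sensitive-attribute values.
    """
    def positive_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return positive_rate(unprivileged) / positive_rate(privileged)


# Hypothetical audit data: group "A" is treated as privileged, "B" as unprivileged.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, "B", "A")
print(round(ratio, 3))  # 0.25 / 0.75 = 0.333

# A common audit heuristic flags ratios below 0.8 (the "four-fifths rule").
if ratio < 0.8:
    print("Potential disparate impact detected")
```

In a continuous-monitoring setup, a check like this would run over rolling windows of production outcomes, with alerts when the ratio crosses the chosen threshold.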

ADDITIONAL EVIDENCE

Example: Generating more images of female-looking individuals when prompted with the word “nurse” (Mishkin et al., 2022)