1. Discrimination & Toxicity

Risks from bias and underrepresentation

The outputs and impacts of general-purpose AI systems can be biased with respect to various aspects of human identity, including race, gender, culture, age, and disability. This creates risks in high-stakes domains such as healthcare, job recruitment, and financial lending. General-purpose AI systems are primarily trained on language and image datasets that disproportionately represent English-speaking and Western cultures, increasing the potential for harm to individuals not well represented by this data.

Source: MIT AI Risk Repository (mit775)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit775

Domain lineage: 1. Discrimination & Toxicity (156 mapped risks) > 1.1 Unfair discrimination and misrepresentation

Mitigation strategy

1. Collect Diverse and Representative Data. Implement comprehensive data governance to ensure training datasets are diverse, representative of all target populations, and reflect a broad range of socio-demographic and cultural characteristics. This includes employing techniques such as dataset augmentation, reweighting, and resampling during data pre-processing to mitigate the foundational bias of underrepresentation.

2. Employ Fairness-Aware Algorithms and Design. Integrate fairness constraints directly into the model development process. Use bias-aware algorithms, such as adversarial debiasing, fair representation learning, or other in-processing methods, to actively minimize the influence of protected attributes (e.g., race, gender, age) on algorithmic decisions while maintaining predictive accuracy.

3. Establish Continuous Auditing and Human Oversight. Ensure system-wide accountability by implementing regular, automated post-deployment monitoring and auditing of model outputs across all demographic groups. In addition, incorporate a human-in-the-loop mechanism to review high-stakes decisions and provide feedback loops that continuously identify and correct emergent biases in real-world use.
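The reweighting technique mentioned in point 1 can be sketched as follows. This is a minimal illustration, not part of the MIT entry: it assigns each training example a weight inversely proportional to its demographic group's frequency, so that underrepresented groups contribute equally to the training loss. The function name and the toy group labels are hypothetical.

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Weight each example by total / (n_groups * group_count),
    so every group's weights sum to the same value and each group
    has equal influence on a weighted training objective."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    return [total / (n_groups * counts[g]) for g in group_labels]

# Toy demographic labels: group "A" is heavily overrepresented.
groups = ["A"] * 8 + ["B"] * 2
weights = inverse_frequency_weights(groups)
# Each "A" example gets weight 0.625, each "B" example 2.5,
# so both groups' weights sum to 5.0.
```

In practice these per-example weights would be passed to a training routine that accepts sample weights (many ML libraries expose such a parameter); resampling achieves a similar effect by duplicating or subsampling examples instead of weighting them.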