Back to the MIT repository
1. Discrimination & Toxicity1 - Pre-deployment

Bias and discrimination (value embedding)

Generative AI models may also be subject to the “value embedding” phenomenon.361 “Value embedding” refers to the fact that developers of generative AI models strive to minimize biased outputs by retraining their models based on normative values.362 Contemporary state-of- the-art models not only reflect the values embedded within their training data, they also undergo additional fine-tuning that follows a set of chosen rules and principles. Due to the absence of universally accepted standards, developers bear the responsibility of making decisions on sensitive issues. These practices lead to concerns that a developer’s ideology and vision of the world are embedded in the model. This generates a risk that the model incorporates values that are either unrepresentative of certain segments of the population or that offer a static, oversimplified reflection of global cultural norms and evolving social views.

Source: MIT AI Risk Repositorymit737

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

1 - Pre-deployment

Risk ID

mit737

Domain lineage

1. Discrimination & Toxicity

156 mapped risks

1.3 > Unequal performance across groups

Mitigation strategy

1. Establish a comprehensive, mandated governance framework that defines clear ethical principles and policies for generative AI development, including a requirement for human-in-the-loop validation of critical AI-assisted decisions to mitigate the risk of relying solely on unrepresentative, embedded values. 2. Implement pluralistic and context-aware value alignment strategies during fine-tuning, utilizing broadly representative datasets and technical mechanisms (e.g., personalized fine-tuning tokens, community-updated model subsets) to integrate and reflect diverse cultural norms and individual preferences rather than a static, oversimplified global standard. 3. Conduct continuous bias detection and auditing throughout the AI lifecycle, including pre-deployment red teaming with counterfactual data augmentation and post-deployment monitoring using fairness metrics to systematically identify and correct algorithmic bias that results in unequal performance or discriminatory outputs across demographic groups.