
Erasing social groups

"People, attributes, or artifacts associated with specific social groups are systematically absent or under-represented... Design choices [143] and training data [212] influence which people and experiences are legible to an algorithmic system."

Source: MIT AI Risk Repository (mit136)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit136

Domain lineage

1. Discrimination & Toxicity (156 mapped risks)

1.3 > Unequal performance across groups

Mitigation strategy

1. Mandate the use of diverse and representative training data sets, actively employing reweighting, resampling, or targeted data augmentation to ensure equitable coverage of all relevant social and intersectional groups, thereby directly counteracting systematic under-representation and erasure (a reweighting sketch follows this list).

2. Integrate rigorous, group-specific fairness metrics and representational-harm measurement techniques (e.g., for erasure or vagueness) into the model development lifecycle to quantitatively assess performance disparities and identify algorithmic choices whose outputs fail to acknowledge the identity or causes of specific groups (a group-wise audit sketch also follows).

3. Establish a continuous AI governance strategy that includes regular, repeatable risk assessments and auditing of model behavior in deployment to detect bias drift and ensure accountability, with detailed documentation of all human and algorithmic decisions related to training-data sourcing and output policies.
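To make item 1 concrete, here is a minimal sketch of inverse-frequency reweighting, one common way to keep under-represented groups from being washed out of the training loss. The data, column roles, and the `group_weights` helper are all hypothetical, not part of the repository entry; any classifier accepting `sample_weight` would work in place of the logistic regression shown.

```python
# Sketch: weight each training example by the inverse frequency of its
# social group, so a 10%-minority group contributes as much total loss
# as the 90%-majority group. All names and data here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def group_weights(groups: np.ndarray) -> np.ndarray:
    """Return one weight per sample: 1 / (relative frequency of its group)."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / freq[g] for g in groups])

# Synthetic stand-in data: group "b" is heavily under-represented.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)
groups = rng.choice(["a", "b"], p=[0.9, 0.1], size=1000)

weights = group_weights(groups)
model = LogisticRegression().fit(X, y, sample_weight=weights)
```

Resampling (duplicating or bootstrapping minority-group rows) and targeted augmentation are alternative routes to the same end; reweighting is shown only because it leaves the dataset itself untouched.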
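For item 2, a minimal sketch of a group-wise performance audit under the assumption of binary labels and predictions. The function name `per_group_rates` and the choice of accuracy plus true-positive rate are illustrative; the repository does not prescribe a specific metric, and representational-harm measures for generative output would need different instrumentation.

```python
# Sketch: report accuracy and true-positive rate per group, plus the
# worst-case accuracy gap, to surface unequal performance across groups.
import numpy as np

def per_group_rates(y_true: np.ndarray, y_pred: np.ndarray, groups: np.ndarray) -> dict:
    """Per-group accuracy, TPR, and sample count for a binary classifier."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        acc = float((y_true[m] == y_pred[m]).mean())
        pos = y_true[m] == 1
        tpr = float((y_pred[m][pos] == 1).mean()) if pos.any() else float("nan")
        rates[g] = {"accuracy": acc, "tpr": tpr, "n": int(m.sum())}
    return rates

def accuracy_gap(rates: dict) -> float:
    """Largest pairwise accuracy difference across groups."""
    accs = [v["accuracy"] for v in rates.values()]
    return max(accs) - min(accs)

# Usage with hypothetical arrays:
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(per_group_rates(y_true, y_pred, groups))
print(accuracy_gap(per_group_rates(y_true, y_pred, groups)))
```

Tracking such gaps release over release, as item 3 requires, is what turns a one-off fairness check into detection of bias drift.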

ADDITIONAL EVIDENCE

I'm in a lesbian partnership right now and wanting to get married and envisioning a wedding [...] and I'm so sick of [searching for 'lesbian wedding' and seeing] these straight weddings