1. Discrimination & Toxicity

Bias

An AI system is only as good as the data it is trained on. If that data contains bias (and much data does), the system will reproduce that bias.

Source: MIT AI Risk Repository (mit88)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 1 - Pre-deployment

Risk ID: mit88

Domain lineage: 1. Discrimination & Toxicity (156 mapped risks) > 1.1 Unfair discrimination and misrepresentation

Mitigation strategy

1. Mandate Representative Data Curation: Implement a systematic process to collect and curate training datasets that achieve demographic and statistical parity across all protected attributes. This includes employing techniques such as targeted data augmentation, upsampling of minority groups, or synthetic data generation to ensure the training corpus is maximally representative of the deployment population.

2. Integrate Fairness-Aware Algorithmic Constraints: Incorporate in-processing mitigation strategies during model training by adjusting the optimization function or loss calculation. Utilize fairness-aware regularization (e.g., MinDiff, Counterfactual Logit Pairing, or adversarial debiasing) to penalize discrepancies in prediction distributions or outcomes between sensitive data slices, thereby enforcing criteria such as equalized odds or demographic parity.

3. Conduct Pre-Deployment Bias Auditing and Feature Analysis: Perform rigorous, multi-methodological bias audits on both the training data and the model-in-development. This must include causal analysis to identify and either remove or transform proxy variables that are unintentionally correlated with protected attributes, ensuring that human cognitive bias is not introduced via subjective labeling or feature selection.
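The first and third steps above can be sketched in code. The snippet below is a minimal illustration, not part of the repository entry: it assumes tabular records stored as plain dicts, and the function names (`upsample_minority`, `demographic_parity_gap`) are hypothetical. It shows naive upsampling of under-represented groups (step 1) and a demographic-parity audit metric, the maximum gap in positive-prediction rate between groups (step 3).

```python
from collections import Counter
import random

def upsample_minority(rows, group_key):
    """Duplicate randomly chosen rows from under-represented groups until
    every group's count matches the largest group (naive upsampling)."""
    counts = Counter(r[group_key] for r in rows)
    target = max(counts.values())
    rng = random.Random(0)  # fixed seed so the augmentation is reproducible
    out = list(rows)
    for group, n in counts.items():
        pool = [r for r in rows if r[group_key] == group]
        out.extend(rng.choice(pool) for _ in range(target - n))
    return out

def demographic_parity_gap(rows, group_key, pred_key):
    """Largest difference in positive-prediction rate between any two
    groups; 0.0 means perfect demographic parity on this slice."""
    rates = {}
    for group in {r[group_key] for r in rows}:
        grp = [r for r in rows if r[group_key] == group]
        rates[group] = sum(r[pred_key] for r in grp) / len(grp)
    return max(rates.values()) - min(rates.values())

# Toy audit: group "A" is predicted positive 75% of the time, "B" only 50%.
data = ([{"group": "A", "pred": p} for p in (1, 1, 1, 0)]
        + [{"group": "B", "pred": p} for p in (0, 1)])
print(demographic_parity_gap(data, "group", "pred"))  # prints 0.25
print(Counter(r["group"] for r in upsample_minority(data, "group")))
```

In practice a real audit would use an established toolkit (e.g., Fairlearn's group metrics) and would slice by every protected attribute, but the parity calculation is the same rate-difference shown here.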