1. Discrimination & Toxicity

Fairness

This challenge arises when a learning model produces decisions that are biased with respect to sensitive attributes... the data itself may be biased, which results in unfair decisions. Therefore, this problem should be addressed at the data level, as a preprocessing step.

Source: MIT AI Risk Repository (mit598)
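Bias of this kind is commonly quantified before any mitigation is applied. As an illustration (not part of the repository entry), the function below is a minimal sketch of one standard fairness metric, statistical parity difference: the gap in positive-decision rates between an unprivileged and a privileged group. The function name and arguments are hypothetical.

```python
def statistical_parity_difference(groups, decisions, privileged, unprivileged):
    """Difference in positive-decision rates between two groups.

    A value near 0 indicates statistical parity; a negative value
    means the unprivileged group receives favorable decisions
    less often than the privileged group.
    """
    def positive_rate(g):
        # Decisions (0/1) for members of group g
        outcomes = [d for grp, d in zip(groups, decisions) if grp == g]
        return sum(outcomes) / len(outcomes)

    return positive_rate(unprivileged) - positive_rate(privileged)
```

A gap detected by such a metric on the raw training data is what motivates the data-level interventions described under the mitigation strategy.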

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

1 - Pre-deployment

Risk ID

mit598

Domain lineage

1. Discrimination & Toxicity

156 mapped risks

1.3 > Unequal performance across groups

Mitigation strategy

- Implement inclusive data curation protocols to ensure training datasets are systematically representative of all relevant demographic subgroups, thereby addressing foundational representation and selection biases at the source.
- Apply data transformation techniques such as resampling (e.g., oversampling under-represented groups or undersampling privileged groups) or reweighing to the training instances to adjust the group distributions and enforce fairness metrics like statistical parity.
- Utilize Fair Representation Learning (FRL) methods to map the raw data into a latent feature space that is maximally predictive of the target outcome but minimally encodes information related to protected attributes.
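The reweighing step above can be sketched concretely. The snippet below is an illustrative implementation of the classic reweighing scheme (Kamiran & Calders), which assigns each training instance the weight w(g, y) = P(g)·P(y) / P(g, y), so that under the weighted distribution the sensitive group and the label are statistically independent. The function name is an assumption for illustration, not an API from the repository.

```python
from collections import Counter

def reweigh(groups, labels):
    """Compute per-instance weights w(g, y) = P(g) * P(y) / P(g, y).

    Under the returned weights, group membership and label are
    independent, i.e. the weighted data satisfies statistical parity.
    """
    n = len(labels)
    group_counts = Counter(groups)            # observed P(g) * n
    label_counts = Counter(labels)            # observed P(y) * n
    joint_counts = Counter(zip(groups, labels))  # observed P(g, y) * n
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]
```

For example, if group "a" holds most of the positive labels, its positive instances are down-weighted (w < 1) and its negative instances up-weighted (w > 1), while already-balanced data yields uniform weights of 1. The weights are then passed to any learner that accepts sample weights.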