
Opportunity loss

Opportunity loss occurs when algorithmic systems enable disparate access to the information and resources needed to participate equitably in society, including the withholding of housing through racially targeted ads [10] and of social services along lines of class [84].

Source: MIT AI Risk Repository (risk ID mit141)

ENTITY: 2 - AI
INTENT: 2 - Unintentional
TIMING: 2 - Post-deployment
Risk ID: mit141

Domain lineage

1. Discrimination & Toxicity > 1.1 Unfair discrimination and misrepresentation (156 mapped risks)

Mitigation strategy

1. Proactively curate and augment training datasets to ensure demographic and contextual representativeness. This includes data preprocessing techniques such as reweighting or synthetic sampling to mitigate historical and collection biases, which are the root cause of disparate inputs to resource-allocation models (see the reweighting sketch below).
2. Employ fairness-aware constraints and optimization techniques during model development (in-processing). Specifically, use methods such as adversarial debiasing, regularization with fairness terms, or loss-function adjustments (e.g., MinDiff) to minimize the statistical dependence between protected attributes and the resulting resource-allocation decisions, thereby optimizing for equitable opportunity parity (see the penalized-training sketch below).
3. Establish a continuous post-deployment governance framework that mandates regular, rigorous audits of the system's outputs using disparate impact metrics (see the audit sketch below). The framework must include transparent monitoring, accessible feedback loops for affected individuals, and defined procedures for human intervention to correct observed instances of opportunity loss or economic detriment.
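As a concrete illustration of point 1, the following is a minimal sketch of the reweighing preprocessing technique (in the style of Kamiran & Calders): each example is weighted so that protected group and outcome label behave as if statistically independent in the training data. The function name and data are hypothetical, not part of the repository entry.

```python
import numpy as np

def reweighing_weights(groups, labels):
    """Per-example weights equalizing the joint distribution of
    protected group and outcome label: w = P(g) * P(y) / P(g, y)."""
    groups = np.asarray(groups)
    labels = np.asarray(labels)
    weights = np.empty(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            observed = mask.mean()                     # P(g, y) in the data
            if observed == 0:
                continue
            independent = (groups == g).mean() * (labels == y).mean()
            weights[mask] = independent / observed
    return weights

# Hypothetical example: the group under-represented among positive
# labels gets up-weighted, offsetting historical collection bias.
g = np.array([0, 0, 0, 1, 1, 1])
y = np.array([1, 1, 0, 1, 0, 0])
print(reweighing_weights(g, y))   # [0.75 0.75 1.5 1.5 0.75 0.75]
```

The weights can be passed to any learner that accepts per-sample weights, leaving the model itself unchanged.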
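For point 2, here is a minimal in-processing sketch: a logistic regression trained with an added statistical-parity penalty on the gap in mean scores between groups. This is a simple stand-in for the regularization-with-fairness-terms approach named above, not MinDiff itself; all names and the synthetic data are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Log loss plus lam * (mean score of group 0 - mean score of
    group 1)^2, minimized by plain gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    g0, g1 = group == 0, group == 1
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_bce = X.T @ (p - y) / n               # usual log-loss gradient
        gap = p[g0].mean() - p[g1].mean()          # statistical-parity gap
        s = p * (1 - p)                            # sigmoid derivative
        dgap = (X[g0] * s[g0][:, None]).mean(axis=0) \
             - (X[g1] * s[g1][:, None]).mean(axis=0)
        w -= lr * (grad_bce + lam * 2 * gap * dgap)
    return w

# Hypothetical biased data: group 1 receives positives less often
# for the same underlying features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
group = (rng.random(200) < 0.5).astype(int)
y = ((X[:, 0] + 0.8 * (group == 0)
      + rng.normal(scale=0.5, size=200)) > 0).astype(int)
w = train_fair_logreg(X, y, group, lam=5.0)
p = sigmoid(X @ w)
print("score gap:", p[group == 0].mean() - p[group == 1].mean())
```

Raising `lam` trades predictive fit for a smaller between-group score gap; the same pattern generalizes to other differentiable fairness penalties.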
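And for point 3, a minimal post-deployment audit sketch using the disparate impact ratio and the four-fifths rule as the trigger for human review. The function name, threshold parameter, and data are assumptions; the 0.8 threshold follows the common four-fifths convention.

```python
import numpy as np

def disparate_impact_audit(decisions, group, threshold=0.8):
    """Audit binary allocation decisions (1 = granted, 0 = denied)
    against a reference group (0) and a protected group (1)."""
    decisions = np.asarray(decisions)
    group = np.asarray(group)
    rate_ref = decisions[group == 0].mean()
    rate_prot = decisions[group == 1].mean()
    ratio = rate_prot / rate_ref if rate_ref > 0 else float("nan")
    return {
        "selection_rate_reference": rate_ref,
        "selection_rate_protected": rate_prot,
        "disparate_impact_ratio": ratio,
        "statistical_parity_difference": rate_prot - rate_ref,
        "passes_four_fifths_rule": ratio >= threshold,
    }

# Hypothetical audit batch: ~67% vs. 25% grant rates give a ratio
# of ~0.375, well below the 0.8 bar, so the case is flagged for
# human intervention per the governance framework.
d = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 1])
g = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0])
print(disparate_impact_audit(d, g))
```

Running such a check on every scoring batch, and logging the results, gives the monitoring and feedback loop the framework calls for.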

ADDITIONAL EVIDENCE

"Systems ... wrongfully deny welfare benefits, kidney transplants, and mortgages to individuals of color as compared to white counterparts."