
Bias and Discrimination

Because these AI systems are claimed to generate biased and discriminatory results, they negatively affect the rights of individuals, the principles of adjudication, and overall judicial integrity.

Source: MIT AI Risk Repository (mit468)

ENTITY: 2 - AI
INTENT: 2 - Unintentional
TIMING: 2 - Post-deployment
Risk ID: mit468

Domain lineage

1. Discrimination & Toxicity (156 mapped risks) > 1.1 Unfair discrimination and misrepresentation

Mitigation strategy

1. Systematically audit and enhance training data quality by ensuring demographic diversity and representativeness, employing techniques such as reweighting or stratified sampling for underrepresented subgroups to prevent the propagation of historical or sampling bias into the model.
2. Integrate fairness-aware machine learning constraints during model training, utilizing in-processing methods such as adversarial debiasing, fair representation learning, or optimization against fairness metrics (e.g., Equalized Odds) to minimize differential predictive performance across sensitive attributes.
3. Institute a continuous monitoring and auditing framework for the deployed AI system, including real-time performance testing for statistical parity across sensitive groups and incorporating a human-in-the-loop oversight mechanism to review and override potentially biased, high-stakes decisions (a minimal sketch of steps 1 and 3 follows this list).
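The repository entry itself contains no code; as a rough illustration only, the Python sketch below (the function names, NumPy-based implementation, and synthetic data are assumptions, not part of the entry) shows inverse-frequency reweighting of underrepresented subgroups (step 1) and a per-group audit of statistical parity and equalized-odds gaps for a deployed binary classifier (step 3).

```python
import numpy as np

def subgroup_reweighting(groups):
    """Inverse-frequency sample weights so underrepresented subgroups
    contribute proportionally during (re)training -- mitigation step 1.
    `groups` is a 1-D array of sensitive-attribute labels, one per example."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / (len(values) * freq[g]) for g in groups])

def fairness_audit(y_true, y_pred, groups):
    """Per-group audit of statistical parity and equalized-odds gaps
    for a deployed binary classifier -- mitigation step 3.
    `y_true` and `y_pred` are 0/1 arrays; `groups` as above."""
    report = {}
    overall_rate = y_pred.mean()
    tpr_all = y_pred[y_true == 1].mean()
    fpr_all = y_pred[y_true == 0].mean()
    for g in np.unique(groups):
        mask = groups == g
        report[g] = {
            # Statistical parity: group positive-prediction rate vs. overall rate
            "statistical_parity_diff": y_pred[mask].mean() - overall_rate,
            # Equalized odds: true/false positive rate gaps vs. overall rates
            "tpr_gap": y_pred[mask & (y_true == 1)].mean() - tpr_all,
            "fpr_gap": y_pred[mask & (y_true == 0)].mean() - fpr_all,
        }
    return report

# Hypothetical usage on logged predictions from a deployed risk model
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    groups = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])
    y_true = rng.integers(0, 2, size=1000)
    y_pred = rng.integers(0, 2, size=1000)
    print(subgroup_reweighting(groups)[:5])
    print(fairness_audit(y_true, y_pred, groups))
```

In practice, the in-processing methods named in step 2 (adversarial debiasing, fair representation learning) would typically be applied through a dedicated fairness toolkit rather than hand-rolled metrics like these; the sketch is only meant to make the audited quantities concrete.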

ADDITIONAL EVIDENCE

This can lead to undesirable effects such as algorithmic bias, racial discrimination, and a surge in incarceration rates due to elevated dependence on these predictive algorithms within a specific criminal justice system. Such automated risk-assessment systems are therefore deeply concerning from constitutional, technical, and ethical standpoints.