Bias and fairness
Participants were concerned that AI systems might perpetuate existing prejudices and discrimination, particularly in sensitive domains such as hiring, lending and law enforcement (Table 1). Because AI systems are trained on historical data, they can inherit and reproduce the biases embedded in those datasets; the resulting decisions may disproportionately harm specific populations, widening socioeconomic inequalities and entrenching discriminatory practices. Participants in the research therefore emphasised the need for AI developers to design systems that favour justice and actively work to minimise biases.
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
3 - Other
Risk ID
mit588
Domain lineage
1. Discrimination & Toxicity
1.1 > Unfair discrimination and misrepresentation
Mitigation strategy
1. Prioritize Diverse Data Curation and Pre-processing: Systematically audit and curate training datasets to ensure demographic representation, conducting rigorous checks for historical and societal biases. Employ data pre-processing techniques such as reweighting, resampling, or synthetic data generation to mitigate the reproduction of existing inequalities inherited from unrepresentative historical data.
2. Integrate Fairness-Aware Algorithmic Constraints: Incorporate fairness metrics and algorithmic constraints (e.g., adversarial debiasing, fair representation learning, or regularization terms) directly into the model training objective function. This ensures the model's design actively minimizes statistical disparities and systematic favoritism across identified protected or sensitive groups.
3. Establish Continuous Bias Auditing and Governance: Implement independent, continuous post-deployment monitoring and auditing using fairness metrics and performance testing across different demographic groups. Couple this with robust governance mechanisms that mandate transparency (e.g., explainable AI) and establish human-in-the-loop review to override or correct biased AI-generated decisions in sensitive, high-stakes contexts.
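As a minimal sketch of how strategies 1 and 3 can be operationalised, the snippet below implements two standard building blocks: a demographic parity audit (the gap in positive-prediction rates across groups) and Kamiran & Calders-style reweighting, which assigns per-example weights so that group membership and label become statistically independent in the training data. The function names and data layout here are illustrative assumptions, not part of the source text.

```python
from collections import Counter

def demographic_parity_difference(groups, predictions):
    """Audit metric: largest gap in positive-prediction rate
    between any two demographic groups (0.0 means parity)."""
    totals, positives = Counter(), Counter()
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += int(p)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

def reweighting_weights(groups, labels):
    """Pre-processing weights (Kamiran & Calders reweighing):
    weight each (group, label) cell by expected/observed frequency,
    so that under the weights group and label are independent."""
    n = len(groups)
    g_count = Counter(groups)
    y_count = Counter(labels)
    gy_count = Counter(zip(groups, labels))
    return [
        (g_count[g] * y_count[y]) / (n * gy_count[(g, y)])
        for g, y in zip(groups, labels)
    ]
```

In practice the audit would run continuously on post-deployment predictions broken down by protected attribute, and the weights would be passed to a learner's `sample_weight` parameter during training.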