
Ethical AI Risks

In the context of ethical AI risks, two risks are of particular importance. First, AI systems may lack a legitimate ethical basis for establishing rules that greatly influence society and human relationships (Wirtz & Müller, 2019). Second, AI-based discrimination refers to the unfair treatment of certain population groups by AI systems. Because humans initially programme AI systems, serve as their potential data sources, and shape the associated data processes and databases, human biases and prejudices can become embedded in AI systems and be reproduced (Weyerer & Langer, 2019, 2020).

Source: MIT AI Risk Repository (mit308)

ENTITY: 3 - Other

INTENT: 2 - Unintentional

TIMING: 3 - Other

Risk ID: mit308

Domain lineage: 1.0 > Discrimination & Toxicity (156 mapped risks)

Mitigation strategy

1. Prioritize Data Integrity and Representativeness: Rigorously examine, preprocess, and sanitize all training and testing datasets to ensure they are diverse and representative across relevant demographic and social groups, thereby mitigating the selection, measurement, and prejudice biases inherent in historical data. This foundational step is critical to preventing AI systems from replicating and perpetuating existing societal discrimination (a minimal sketch of such a check follows this list).

2. Implement Algorithmic Fairness and Continuous Testing: Employ mathematical techniques, referred to as "algorithmic fairness", to measure potential discriminatory effects across different population groups. Further, mandate extensive pre-deployment testing, including adversarial testing and evaluation for inappropriate feedback loops, to ensure the model's robustness and reliability in achieving fair outcomes before it is operationalized (see the second sketch below).

3. Establish a Robust AI Governance and Human Oversight Framework: Institute a formal governance structure that clearly articulates ethical principles (Fairness, Transparency, Accountability) and integrates human monitoring and judgment at appropriate stages of the AI lifecycle. This ensures clear lines of responsibility, provides mechanisms for auditing AI-driven decisions, and maintains the capacity for human intervention to override automated decisions when ethical or contextual concerns arise (see the third sketch below).
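
To make the first point concrete, here is a minimal Python sketch of a representativeness check. It assumes each training record carries a hypothetical "group" attribute and that reference population shares are known from an external source; the names and tolerance are illustrative assumptions, not a method prescribed by the repository.

    from collections import Counter

    def representativeness_gaps(records, reference_shares, tolerance=0.05):
        """Flag groups whose observed share in `records` deviates from the
        reference population share by more than `tolerance` (absolute)."""
        counts = Counter(r["group"] for r in records)  # "group" is hypothetical
        total = sum(counts.values())
        gaps = {}
        for group, ref_share in reference_shares.items():
            observed = counts.get(group, 0) / total if total else 0.0
            if abs(observed - ref_share) > tolerance:
                gaps[group] = {"observed": observed, "reference": ref_share}
        return gaps

    # Example: group B is under-represented relative to its population share.
    records = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
    print(representativeness_gaps(records, {"A": 0.6, "B": 0.4}))
    # {'A': {'observed': 0.8, 'reference': 0.6}, 'B': {'observed': 0.2, 'reference': 0.4}}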
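
For the second point, the sketch below computes two widely used fairness measurements over model outputs: the demographic parity difference (the gap in selection rates between groups) and the true-positive-rate gap (an equal-opportunity style measure). The data, group labels, and function names are illustrative assumptions.

    def selection_rate(preds):
        return sum(preds) / len(preds) if preds else 0.0

    def fairness_report(y_true, y_pred, groups):
        """Compute per-group selection rates and true-positive rates, plus the
        worst-case gaps between groups, for binary labels and predictions."""
        by_group = {}
        for yt, yp, g in zip(y_true, y_pred, groups):
            by_group.setdefault(g, []).append((yt, yp))
        rates, tprs = {}, {}
        for g, pairs in by_group.items():
            rates[g] = selection_rate([yp for _, yp in pairs])
            positives = [yp for yt, yp in pairs if yt == 1]
            tprs[g] = sum(positives) / len(positives) if positives else 0.0
        return {
            "demographic_parity_diff": max(rates.values()) - min(rates.values()),
            "tpr_gap": max(tprs.values()) - min(tprs.values()),
            "selection_rates": rates,
            "true_positive_rates": tprs,
        }

    # Toy example with two groups; a large gap on either measure would be
    # grounds to block deployment pending further review.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(fairness_report(y_true, y_pred, groups))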
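
For the third point, here is a minimal sketch of a human-oversight gate, assuming the model exposes a confidence score. Decisions below an assumed threshold are routed to a human reviewer instead of being auto-applied, and every decision and override is appended to an audit log; all names and the threshold value are hypothetical.

    import time

    AUDIT_LOG = []  # in practice, an append-only audit store

    def decide(case_id, model_decision, confidence, threshold=0.9):
        """Auto-apply high-confidence decisions; route the rest to a human."""
        needs_review = confidence < threshold
        AUDIT_LOG.append({
            "case_id": case_id,
            "model_decision": model_decision,
            "confidence": confidence,
            "routed_to_human": needs_review,
            "timestamp": time.time(),
        })
        return "PENDING_HUMAN_REVIEW" if needs_review else model_decision

    def human_override(case_id, final_decision, reviewer):
        """Record who overrode what, preserving the model's original output."""
        AUDIT_LOG.append({"case_id": case_id, "override": final_decision,
                          "reviewer": reviewer, "timestamp": time.time()})
        return final_decision

    print(decide("c-1", "approve", 0.97))  # approve (auto-applied)
    print(decide("c-2", "deny", 0.62))     # PENDING_HUMAN_REVIEW
    print(human_override("c-2", "approve", "reviewer-7"))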