1. Discrimination & Toxicity

Broken systems

These are the most frequently mentioned cases. They refer to situations where the algorithm or the training data leads to unreliable outputs. These systems often assign disproportionate weight to variables such as race or gender, but there is no transparency to this effect, making the outputs impossible to challenge. Such situations are typically identified only when regulators or the press examine the systems under freedom-of-information laws. The damage they cause to people's lives can nevertheless be dramatic: lost homes, divorce, prosecution, or incarceration. Beyond the inherent technical shortcomings, auditors have also pointed to "insufficient coordination" between the developers of the systems and their users as a reason ethical considerations are neglected. This raises questions about the education of future creators of AI-infused systems, not only in technical competence (e.g., requirements, algorithms, and training) but also in ethics and responsibility. For example, as autonomous vehicles become more common, moral dilemmas about what to do in potential accident situations emerge, as evidenced in this MIT experiment. Decisions about how machines should act divide opinion and require deep reflection, and perhaps regulation.

Source: MIT AI Risk Repository (mit57)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit57

Domain lineage: 1. Discrimination & Toxicity (156 mapped risks) > 1.1 Unfair discrimination and misrepresentation

Mitigation strategy

1. Prioritize diverse and representative data collection and preprocessing: Implement a comprehensive data governance framework mandating the use of training datasets that accurately reflect the target population's demographic diversity, utilizing techniques such as resampling, reweighting, and anonymization to prevent the embedding of systemic and historical biases into the model.

2. Enforce model transparency and explainability mechanisms: Establish strict requirements for model interpretability (XAI) and transparency to document how predictions are made, allowing for rigorous auditing and challenge of disproportionate or discriminatory outcomes by regulators, auditors, and affected individuals. This must be coupled with mandatory human-in-the-loop oversight for high-stakes decisions.

3. Establish robust AI governance and accountability structures: Develop and enforce an organization-wide ethical framework and clear policies that integrate bias mitigation throughout the entire AI lifecycle. This includes mandatory independent audits, impact assessments, and fostering cross-functional collaboration between technical developers, ethicists, and domain experts, so that ethical considerations are prioritized and insufficient coordination is eliminated.
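To make the reweighting technique in point 1 concrete, here is a minimal sketch of inverse-frequency sample reweighting, where each demographic group ends up contributing equal total weight to the training loss. The group labels and function name are illustrative assumptions, not part of the repository entry; real deployments would use a fairness toolkit and audited group definitions.

```python
# Sketch: inverse-frequency ("balanced") reweighting so that every
# demographic group contributes equal total weight to training.
# Group labels "A"/"B" are purely illustrative.
from collections import Counter

def group_weights(groups):
    """Return a per-sample weight list; each group's summed weight is equal."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    # weight_g = total / (n_groups * count_g): the standard balanced scheme
    return [total / (n_groups * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]       # group "A" is over-represented 3:1
weights = group_weights(groups)
# Each group's summed weight is total / n_groups = 4 / 2 = 2.0,
# so the minority group "B" is no longer drowned out during training.
```

These weights would typically be passed to a training routine's `sample_weight` argument; resampling (duplicating or subsampling rows) achieves a similar effect when the learner does not accept weights.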