
Risk of Injury

Poorly designed intelligent systems can cause moral, psychological, and physical harm. For example, the use of predictive policing tools may cause more people to be arrested or physically harmed by the police.

Source: MIT AI Risk Repository, risk ID mit127

ENTITY: 1 - Human

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit127

Domain lineage: 1. Discrimination & Toxicity (156 mapped risks) > 1.1 Unfair discrimination and misrepresentation

Mitigation strategy

1. Implement rigorous algorithmic fairness and bias audits, particularly for systems used in high-stakes contexts such as law enforcement. Ensure that training datasets are representative and do not reinforce systemic biases that lead to discriminatory outcomes or over-policing of marginalized communities (a minimal audit sketch follows this list).

2. Establish human-in-the-loop systems and robust governance frameworks for high-risk AI applications, mandating human oversight and override capabilities at critical decision points so that no autonomous decision can cause physical or psychological harm without review (see the review-gate sketch below).

3. Require full transparency and continuous, independent auditing of the risk factors, decision rationale, and real-world societal impact of deployed intelligent systems, both to build trust and to ensure ongoing compliance with human rights and non-discrimination principles.
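To make item 1 concrete, here is a minimal bias-audit sketch in Python. It computes per-group positive-outcome rates and the demographic-parity gap between them; the function name, record fields, and toy data are illustrative assumptions, not part of the MIT AI Risk Repository or any specific fairness toolkit.

```python
# Minimal bias-audit sketch: compare positive-outcome rates across groups.
# All names (audit_outcome_rates, "group", "flagged") are hypothetical.
from collections import defaultdict

def audit_outcome_rates(records, group_key="group", outcome_key="flagged"):
    """Return the positive-outcome rate per group and the max pairwise gap."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for r in records:
        counts[r[group_key]][0] += int(bool(r[outcome_key]))
        counts[r[group_key]][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    gap = max(rates.values()) - min(rates.values())  # demographic-parity gap
    return rates, gap

# Example: a predictive model that flags group "B" far more often than
# group "A" shows a large gap and should trigger further investigation.
records = [
    {"group": "A", "flagged": 0}, {"group": "A", "flagged": 1},
    {"group": "A", "flagged": 0}, {"group": "A", "flagged": 0},
    {"group": "B", "flagged": 1}, {"group": "B", "flagged": 1},
    {"group": "B", "flagged": 1}, {"group": "B", "flagged": 0},
]
rates, gap = audit_outcome_rates(records)
print(rates)  # {'A': 0.25, 'B': 0.75}
print(gap)    # 0.5 -> disparity flagged for human review
```

A real audit would use richer metrics (equalized odds, calibration by group) and statistical tests, but the gap computed here is the simplest signal that over-policing of one group may be occurring.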
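For item 2, a minimal human-in-the-loop sketch, again with hypothetical names: the model output is only a proposal, and a human reviewer must record the final action and a written rationale, which also feeds the audit trail described in item 3.

```python
# Minimal human-in-the-loop sketch: model outputs are never acted on
# automatically; a reviewer records the final action and rationale.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    model_score: float
    model_recommendation: str
    reviewer: Optional[str] = None
    final_action: Optional[str] = None
    rationale: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def propose(subject_id: str, score: float) -> Decision:
    """Model output is a proposal only; nothing is executed here."""
    rec = "escalate" if score >= 0.8 else "no_action"
    return Decision(subject_id, score, rec)

def human_review(d: Decision, reviewer: str, action: str,
                 rationale: str) -> Decision:
    """The human decision is authoritative; the rationale is kept for audits."""
    d.reviewer, d.final_action, d.rationale = reviewer, action, rationale
    return d

d = propose("case-42", 0.91)
d = human_review(d, "reviewer-07", "no_action",
                 "Flag appears driven by a neighborhood feature, "
                 "not individual conduct.")
print(d.final_action, "|", d.rationale)
```

The design point is that the override path and the logged rationale exist at every critical decision, so independent auditors can later reconstruct why any given action was or was not taken.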