1. Discrimination & Toxicity

Data Breach/Privacy & Liberty

The risks associated with the use of AI are still unpredictable and unprecedented, and there are already several examples showing that AI systems have made discriminatory decisions against minorities, reinforced social stereotypes in Internet search engines, and enabled data breaches.

Source: MIT AI Risk Repository (mit128)

ENTITY: 2 - AI
INTENT: 2 - Unintentional
TIMING: 2 - Post-deployment
Risk ID: mit128
Domain lineage: 1. Discrimination & Toxicity (156 mapped risks) > 1.1 Unfair discrimination and misrepresentation

Mitigation strategy

1. Establish a Robust AI Governance and Risk Management Framework. Implement comprehensive organizational structures, policies, and ethical frameworks, including a "responsible AI by design" approach, to ensure clear accountability, transparency, and non-discrimination throughout the entire AI lifecycle.

2. Mandate Continuous Bias Assessment and Auditing. Conduct regular, rigorous fairness audits, impact assessments (e.g., Fundamental Rights Impact Assessments), and technical bias testing with algorithmic fairness metrics, both pre-deployment and continuously post-deployment, to detect, measure, and mitigate unfair discrimination and misrepresentation.

3. Ensure Data Quality, Diversity, and Representativeness. Audit training datasets for historical, representation, and stereotyping biases, and actively employ targeted data collection practices or augmentation techniques (e.g., synthetic data) so that the data accurately reflects the target population and minimizes discriminatory patterns.
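To make mitigation step 2 concrete, the following is an illustrative sketch (not part of the repository) of two common algorithmic fairness metrics that a bias audit might compute over model predictions: the demographic parity difference (gap in positive-prediction rates between two groups) and the equal-opportunity gap (difference in true-positive rates). The group labels and prediction values below are hypothetical.

```python
# Hypothetical bias-audit metrics; group labels and data are illustrative only.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

def equal_opportunity_gap(y_true, y_pred, groups):
    """Absolute gap in true-positive rates (TPR) between two groups."""
    tprs = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        # Predictions for the actually-positive members of this group.
        positives = [p for t, p in pairs if t == 1]
        tprs[g] = sum(positives) / len(positives)
    vals = list(tprs.values())
    return abs(vals[0] - vals[1])

if __name__ == "__main__":
    y_true = [1, 1, 0, 1, 1, 0, 1, 0]   # ground-truth outcomes
    y_pred = [1, 0, 0, 1, 1, 1, 0, 0]   # model decisions
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected attribute
    print(demographic_parity_difference(y_pred, groups))
    print(equal_opportunity_gap(y_true, y_pred, groups))
```

In a continuous post-deployment audit, such metrics would be tracked over time and a nonzero gap above a chosen threshold would trigger investigation; libraries such as Fairlearn provide production-grade versions of these measures.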