MIT AI risk domains
Explore the MIT AI Risk Repository by domain through dedicated hub pages that link to canonical risk detail URLs.
1. Discrimination & Toxicity (156 risks)
Risks of bias, toxicity, discriminatory harm, and systemic exclusion in AI systems.

2. Privacy & Security (186 risks)
Risks of data leakage, attacks, system compromise, and misuse of sensitive information.

3. Misinformation (74 risks)
Risks of misleading content, narrative manipulation, and degradation of the information environment.

4. Malicious Actors & Misuse (223 risks)
Risks created by deliberate malicious use of AI tools and capabilities.

5. Human-Computer Interaction (92 risks)
Risks at the human-system interface, including dependence, deception, and erosion of agency.

6. Socioeconomic and Environmental (262 risks)
Distributional, institutional, and environmental risks created by AI deployment.

7. AI System Safety, Failures, & Limitations (375 risks)
Risks of failures, unsafe behavior, and operational limits in AI systems and models.