Living Database (MIT Risk Repository)
Updated 25/01/2026

Explore the Taxonomy

Showing 1368 of 1368 risks
Type 1: Diffusion of responsibility
6. Socioeconomic and Environmental | 3 - Other
Societal-scale harm can arise from AI built by a diffuse collection of creators, where no one is uniquely accountable for the technology's creation or use, as in a classic tragedy of the commons.
Type 2: Bigger than expected
7. AI System Safety, Failures, & Limitations | 2 - Post-deployment
Harm can result from AI that was not expected to have a large impact at all, such as a lab leak, a surprisingly addictive open-source product, or an unexpected repurposing of a research prototype.
Type 3: Worse than expected
7. AI System Safety, Failures, & Limitations | 2 - Post-deployment
AI intended to have a large societal impact can turn out harmful by mistake, such as a popular product that creates problems and partially solves them only for its users.
Type 4: Willful indifference
6. Socioeconomic and Environmental | 2 - Post-deployment
As a side effect of a primary goal like profit or influence, AI creators can willfully allow it to cause widespread societal harms like pollution, resource depletion, mental illness, misinformation, or injustice.
Type 5: Criminal weaponization
4. Malicious Actors & Misuse | 2 - Post-deployment
One or more criminal entities could create AI to intentionally inflict harms, such as for terrorism or combating law enforcement.
Type 6: State weaponization
4. Malicious Actors & Misuse | 2 - Post-deployment
AI deployed by states in war, civil war, or law enforcement can easily yield societal-scale harm.
Harmful Content
1. Discrimination & Toxicity | 2 - Post-deployment
LLM-generated content sometimes contains biased, toxic, or private information.
Bias
1. Discrimination & Toxicity | 3 - Other
The training datasets of LLMs may contain biased information that leads LLMs to generate outputs with social biases.
Toxicity
1. Discrimination & Toxicity | 2 - Post-deployment
Toxicity means the generated content contains rude, disrespectful, or even illegal information.
Privacy Leakage
2. Privacy & Security | 2 - Post-deployment
Privacy leakage means the generated content includes sensitive personal information.
Untruthful Content
3. Misinformation | 2 - Post-deployment
LLM-generated content could contain inaccurate information.
Factuality Errors
3. Misinformation | 2 - Post-deployment
LLM-generated content could contain information that is factually incorrect.
Faithfulness Errors
3. Misinformation | 3 - Other
LLM-generated content could contain information that is not faithful to the source material or input used.
Unhelpful Uses
4. Malicious Actors & Misuse | 2 - Post-deployment
Improper uses of LLM systems can cause adverse social impacts.
Academic Misconduct
4. Malicious Actors & Misuse | 2 - Post-deployment
Improper use (i.e., abuse) of LLM systems can cause adverse social impacts, such as academic misconduct.
Copyright Violation
6. Socioeconomic and Environmental | 2 - Post-deployment
LLM systems may output content similar to existing works, infringing on the rights of copyright owners.
Cyber Attacks
4. Malicious Actors & Misuse | 2 - Post-deployment
With powerful LLM systems, hackers can obtain malicious code cheaply and efficiently to automate cyber attacks.
Software Vulnerabilities
2. Privacy & Security | 2 - Post-deployment
Programmers are accustomed to using code generation tools such as GitHub Copilot for program development, which may introduce vulnerabilities into the program.
Software Security Issues
2. Privacy & Security | 1 - Pre-deployment
The software development toolchain of LLMs is complex and can introduce threats to the developed LLM.
Programming Language
2. Privacy & Security | 1 - Pre-deployment
Most LLMs are developed in Python, and vulnerabilities in the Python interpreter pose threats to the developed models.