4. Malicious Actors & Misuse

Threats to human institutions and life

This group comprises 11% of the articles and centers on risks stemming from AI systems designed with malicious intent or whose misuse can pose a threat to human life. It can be divided into two key themes: threats to law and democracy, and transhumanism.

Source: MIT AI Risk Repository (mit580)

ENTITY: 3 - Other
INTENT: 3 - Other
TIMING: 3 - Other
Risk ID: mit580
Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.2 Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Implement rigorous restrictions on access to and deployment of highly capable AI models and autonomous systems, particularly those with the potential for mass harm or oversight of critical infrastructure, unless their safety, containment mechanisms, and human control features are demonstrably proven and externally audited.

2. Substantially increase investment in, and mandate the application of, AI safety and alignment research, focusing on provable methods for adversarial robustness, model interpretability (Explainable AI), and enforceable human autonomy through integrated override and intervention capabilities that counter goal drift and power-seeking optimization.

3. Establish comprehensive international governance and regulatory frameworks, including independent oversight institutions, mandatory accountability and audit trails for AI-driven decisions, and coordinated policies to mitigate the systemic risks arising from the competitive "AI race" and from the use of AI for widespread disinformation and manipulation of democratic processes.