4. Malicious Actors & Misuse · 2 - Post-deployment

Failures in or misuse of intermediary (non-AGI) AI systems, resulting in catastrophe

- Deployment of “prepotent” AI systems that are non-general but capable of outperforming human collective efforts on various key dimensions;170
- Militarization of AI enabling mass attacks using swarms of lethal autonomous weapons systems;171
- Military use of AI leading to (intentional or unintentional) nuclear escalation, either because machine learning systems are directly integrated into nuclear command and control systems in ways that result in escalation,172 or because conventional AI-enabled systems (e.g., autonomous ships) are deployed in ways that result in provocation and escalation;173
- Nuclear arsenals serving as a capability “overhang” for advanced AI systems;174
- Use of AI to accelerate research into catastrophically dangerous weapons (e.g., bioweapons).175

Source: MIT AI Risk Repository (mit870)

ENTITY: 3 - Other

INTENT: 3 - Other

TIMING: 2 - Post-deployment

Risk ID: mit870

Domain lineage: 4. Malicious Actors & Misuse > 4.2 Cyberattacks, weapon development or use, and mass harm (223 mapped risks)

Mitigation strategy

1. **Implement and Enforce Strict Dual-Use Access Controls:** Apply stringent technical and legal controls, such as "know-your-customer" (KYC) screening and compute monitoring, to restrict access to advanced AI models with capabilities that could accelerate the development or proliferation of biological, chemical, or autonomous weapons. Simultaneously, remove or severely limit such dangerous capabilities from general-purpose AI systems before deployment.

2. **Establish International AI Military Governance:** Proactively develop and promote international norms, treaties, and confidence-building measures to prevent unintended escalation or catastrophic failure resulting from the militarization of AI, particularly concerning lethal autonomous weapons systems (LAWS) and the integration of machine learning into nuclear command and control (NC3) infrastructure.

3. **Mandate Rigorous Red Teaming and Security Audits:** Require comprehensive, independent pre-deployment and continuous-lifecycle safety assessments and adversarial testing (red teaming) for all AI systems destined for high-stakes domains, focusing on vulnerabilities to prompt injection, data poisoning, and manipulation that could be exploited by malicious actors to cause mass harm or operational failure.