4. Malicious Actors & Misuse · 2 - Post-deployment

Weaponization

Weaponizing AI may be an on-ramp to more dangerous outcomes. In recent years, deep RL algorithms have outperformed humans at aerial combat [18], AlphaFold has been used to discover new chemical weapons [66], researchers have been developing AI systems for automated cyberattacks [11, 14], and military leaders have discussed giving AI systems decisive control over nuclear silos.

Source: MIT AI Risk Repository (mit569)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit569

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Mandate and verify the implementation of 'meaningful human control' and 'appropriate levels of human judgment' in all AI applications pertaining to nuclear command-and-control and high-consequence military decision-making, to mitigate the risk of accidental escalation or autonomous weapon deployment.

2. Institute rigorous dual-use research governance frameworks, including mandatory pre-publication risk assessments and controlled openness models for highly consequential AI methodologies, to prevent the malicious exploitation of scientific advancements for the development of novel chemical or biological weapons.

3. Enforce comprehensive 'AI Security Compliance' programs, including adversarial testing (red teaming), formal verification, and the adoption of AI-aware defensive solutions for all high-risk AI systems in both government and critical private infrastructure, to reduce the impact and success rate of automated cyberattacks.