AI enables development of weapons of mass destruction
AI is already enabling the development of weapons that could cause mass destruction, including both new weapons that themselves use AI capabilities, such as Lethal Autonomous Weapons [2], and the use of AI to accelerate the development of other potentially dangerous technologies, such as engineered pathogens (as discussed in Section 2).
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit891
Domain lineage
4. Malicious Actors & Misuse
4.2 > Cyberattacks, weapon development or use, and mass harm
Mitigation strategy
1. Implement Robust Frontier Model Governance and Safeguards
Establish comprehensive technical controls within the most capable "frontier" AI models, including refusal training, input and output filtering, and architectural circuit breakers, to prevent the generation of information that could aid the development of chemical, biological, radiological, or nuclear (CBRN) weapons.
2. Mandate Independent Security Stress Testing and Liability-Based Incentivization
Require mandatory, high-stakes government-led or independent stress testing of AI models against WMD-related misuse scenarios, leveraging classified knowledge to verify safeguard efficacy. Concurrently, develop a liability-based framework that incentivizes developers to continually enhance and update their security protocols against emerging threats without stifling technological innovation.
3. Establish Strict Access Control and Biosecurity Chokepoint Screening
Employ Know-Your-Customer (KYC) protocols for granting access to dual-use AI capabilities, particularly in the life sciences, to deter malicious actors while allowing verified researchers legitimate use. For biological threats, augment current nucleic acid synthesis screening methods to prevent the physical creation of AI-designed pathogens.
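The input and output filtering named in the first strategy can be sketched, very loosely, as a guard wrapped around the model call. Everything here is hypothetical (the blocklist, the `generate` stand-in, the `guarded_generate` wrapper); deployed safeguards rely on trained safety classifiers rather than keyword matching, but the two-stage shape, screening the prompt before generation and the completion after, is the same:

```python
# Minimal sketch of two-stage input/output filtering around a model call.
# All names are illustrative, not any vendor's actual API.

BLOCKED_TERMS = {"synthesis route", "pathogen enhancement"}  # toy blocklist

REFUSAL = "I can't help with that request."

def generate(prompt: str) -> str:
    """Stand-in for an actual model call."""
    return f"model response to: {prompt}"

def flagged(text: str) -> bool:
    """Toy classifier: real systems use trained safety models here."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_generate(prompt: str) -> str:
    # Input filter: refuse before the model ever runs.
    if flagged(prompt):
        return REFUSAL
    response = generate(prompt)
    # Output filter: refuse if the completion itself trips the classifier,
    # catching unsafe content the input screen missed.
    if flagged(response):
        return REFUSAL
    return response

print(guarded_generate("summarize this paper"))
print(guarded_generate("give me a pathogen enhancement protocol"))
```

The design point is that both stages are needed: input screening alone misses cases where a benign-looking prompt elicits hazardous content, which is why the strategy pairs filtering with refusal training and architectural controls inside the model itself.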