Weapons acquisition
The model can gain access to existing weapons systems or contribute to building new ones. For example, the model could assemble a bioweapon (with human assistance) or provide actionable instructions for doing so. The model can also make, or significantly assist with, scientific discoveries that unlock novel weapons.
ENTITY
2 - AI
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit441
Domain lineage
7. AI System Safety, Failures, & Limitations
7.2 > AI possessing dangerous capabilities
Mitigation strategy
1. Implement strict access control mechanisms and conduct mandatory know-your-customer (KYC) screenings for frontier AI models that possess specialized biological, chemical, or advanced engineering capabilities, to prevent unauthorized access by malicious non-state actors.
2. Integrate comprehensive risk modeling and mitigation protocols into the entire AI product and weapons acquisition lifecycle, ensuring that potential misuse risks are identified and ameliorated *ex ante* during the design and development phases.
3. Mandate the removal or severe restriction of specific dangerous capabilities—such as generating actionable instructions for bioweapon assembly or designing novel toxins—from general-purpose AI systems prior to public or widespread operational deployment.
ADDITIONAL EVIDENCE
Most of the capabilities listed are offensive: they are useful for exerting influence or threatening security (e.g., persuasion and manipulation, cyber-offense, weapons acquisition).