CBRNE weaponization capability
The capacity to develop, produce, or effectively use Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) weapons. This includes the ability of an AI system to significantly lower the barrier for humans or other entities to develop, produce, or use such weapons.
ENTITY
2 - AI
INTENT
1 - Intentional
TIMING
3 - Other
Risk ID
mit1470
Domain lineage
7. AI System Safety, Failures, & Limitations
7.2 > AI possessing dangerous capabilities
Mitigation strategy
1. Implement Capability-Based Governance and Access Control: Establish rigorous governance frameworks that restrict access to advanced AI models (frontier models) based on their demonstrated capabilities, not solely on training compute. This includes enforcing strict access controls, such as 'know-your-customer' screenings and rate limiting for high-risk queries, and assigning legal liability to developers to incentivize safer development practices.
2. Mandate Continuous Adversarial Testing and Capability Removal: Systematically conduct multi-method adversarial testing (red-teaming) focused on CBRNE misuse scenarios across the AI lifecycle, from pre-deployment to post-deployment. The objective is to proactively identify and remove dangerous capabilities, such as those that lower the technical barrier to synthesizing toxic agents or generating weapon design instructions.
3. Harness AI for Proactive Defense and Threat Detection: Invest in the development and deployment of AI systems that enhance defensive mechanisms and counter-proliferation efforts. This includes using AI-driven surveillance to detect anomalous activities in trade, transport, and customs data that indicate illegal CBRN material trafficking, and leveraging predictive models to bolster biosecurity and optimize emergency preparedness and response protocols.
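The rate limiting mentioned in strategy 1 could take many forms; below is a minimal sliding-window sketch in Python. The keyword list and the `HighRiskRateLimiter` class are hypothetical illustrations, not part of any deployed system: a real deployment would replace the keyword check with a proper risk classifier and persist counts across a fleet of servers.

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Hypothetical keyword list standing in for a real high-risk query classifier.
HIGH_RISK_TERMS = {"synthesis route", "precursor", "enrichment"}

def is_high_risk(query: str) -> bool:
    """Crude stand-in: flag queries containing any listed term."""
    q = query.lower()
    return any(term in q for term in HIGH_RISK_TERMS)

class HighRiskRateLimiter:
    """Sliding-window limiter: at most `limit` high-risk queries
    per user within `window` seconds. Benign queries pass freely."""

    def __init__(self, limit: int = 3, window: float = 3600.0):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # user_id -> recent hit timestamps

    def allow(self, user_id: str, query: str,
              now: Optional[float] = None) -> bool:
        if not is_high_risk(query):
            return True  # only high-risk queries count against the limit
        now = time.monotonic() if now is None else now
        hits = self._hits[user_id]
        while hits and now - hits[0] > self.window:
            hits.popleft()  # drop timestamps outside the window
        if len(hits) >= self.limit:
            return False  # over quota: block (and, in practice, log/escalate)
        hits.append(now)
        return True
```

Blocked requests would typically also trigger the escalation paths the strategy describes (audit logging, account review), rather than just returning a refusal.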
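The anomaly detection over trade data in strategy 3 is likewise unspecified here; as one simple illustration (an assumption, not a description of any real counter-proliferation system), a z-score screen flags shipment quantities far above the historical mean. The `flag_anomalies` function and its record format are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(shipments: list[dict], threshold: float = 3.0) -> list[dict]:
    """Flag shipments whose quantity lies more than `threshold` sample
    standard deviations above the mean (a simple z-score screen).

    Each shipment is a dict with at least a numeric "quantity" key.
    """
    quantities = [s["quantity"] for s in shipments]
    mu, sigma = mean(quantities), stdev(quantities)
    return [
        s for s in shipments
        if sigma > 0 and (s["quantity"] - mu) / sigma > threshold
    ]
```

Real screening pipelines would condition on commodity codes, routes, and declared end use rather than raw quantity alone, but the flag-review loop is the same: statistical outliers go to a human analyst, not to automated enforcement.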