4. Malicious Actors & Misuse

Biological and Chemical Risks

The dual-use nature of AI technology presents a critical risk by significantly lowering technical thresholds for malicious non-state actors to design, synthesize, acquire, and deploy CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive) weapons. This capability poses unprecedented challenges to national security, international non-proliferation regimes, and global security governance.

Source: MIT AI Risk Repository (mit1446)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1446

Domain lineage

4. Malicious Actors & Misuse (223 mapped risks)

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Mandatory Technical Safeguards and Red Teaming

Implement rigorous, government-mandated stress testing and closed red-team evaluations for dual-use foundation models and Biological AI Models (BAIMs) prior to deployment to systematically assess their capability to assist in CBRN weapon development. Embed robust model-level safeguards, such as refusal training, input/output filtering, and circuit breakers, to resist attempts at circumvention and the elicitation of dangerous information by malicious actors.

2. Adaptive Access Control and Weight Restrictions

Establish "Know-Your-Customer" (KYC) protocols for advanced, high-risk biological and chemical AI models to verify the identity and legitimate purpose of users, restricting access for anonymous or unverified entities. Concurrently, develop and enforce standards that limit or prohibit the open release of model weights for systems demonstrated to possess substantial dual-use capabilities, especially those that could facilitate the discovery or synthesis of novel toxins and pathogens.

3. Iterative and Coordinated Governance Frameworks

Develop and implement a comprehensive, adaptive national and international governance framework to address the dual-use challenges posed by AI. This framework must clearly define roles and responsibilities across executive agencies, encourage industry best practices, including the testing of models for CBRN risks, and ensure the regulatory structure remains agile enough to respond to the revolutionary pace of technological advancements in AI and biotechnology.
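To make the interaction between strategies 1 and 2 concrete, the sketch below shows how a deployment-time request gate might layer KYC verification (strategy 2) in front of an input filter (one of the model-level safeguards in strategy 1). This is a minimal illustration, not a real safeguard: production systems use trained classifiers and verified identity infrastructure rather than keyword lists, and every name here (`User`, `gate_request`, `BLOCKED_TOPICS`) is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: real safeguards rely on trained content classifiers
# and audited identity verification, not a static keyword list.
BLOCKED_TOPICS = {"pathogen synthesis", "toxin design", "nerve agent"}


@dataclass
class User:
    user_id: str
    kyc_verified: bool  # identity and legitimate purpose confirmed


def gate_request(user: User, prompt: str) -> str:
    """Layer adaptive access control ahead of an input filter."""
    # Strategy 2: restrict access for anonymous or unverified entities.
    if not user.kyc_verified:
        return "DENY: unverified user"
    # Strategy 1: filter inputs touching restricted dual-use topics.
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "REFUSE: restricted dual-use topic"
    return "ALLOW"
```

Ordering the checks this way means an unverified actor never reaches the content filter at all, which mirrors the defense-in-depth intent of combining access control with model-level safeguards.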