
Dual Use Science risks

Frontier AI systems have the potential to accelerate advances in the life sciences, from training new scientists to enabling faster scientific workflows. While these capabilities will have tremendous beneficial applications, there is a risk that they could be used for malicious purposes, such as the development of biological or chemical weapons.

Source: MIT AI Risk Repository (mit1380)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1380

Domain lineage

4. Malicious Actors & Misuse

4.2 > Cyberattacks, weapon development or use, and mass harm

223 mapped risks

Mitigation strategy

1. Establish and Enforce Critical Capability Thresholds (CCTs). Implement rigorous pre-deployment capability evaluations to define and monitor specific levels of AI competence that could substantially lower the barrier to developing chemical, biological, radiological, or nuclear (CBRN) threats. Development or deployment of a model must be halted, or immediately subjected to maximum safeguards, if it is assessed to have reached or exceeded a CCT (Source 10, 11). A minimal sketch of such a gate follows this list.

2. Implement Layered Deployment and Inference Controls. Employ a defense-in-depth strategy that combines access management with automated monitoring. This includes restricting access to the most capable models (e.g., via limited APIs and verified identity) and applying Detection and Intervention Mitigations that monitor user inputs and outputs for patterns indicative of malicious intent, such as requests for novel toxin designs or synthesis pathways for biological weapons (Source 8, 11). The second sketch after this list illustrates this layering.

3. Integrate AI Governance with Existing Dual Use Research Oversight. Require Principal Investigators (PIs) and AI developers whose work intersects with the life sciences to adhere to established Dual Use Research of Concern (DURC) policies and processes. This entails mandatory risk assessment, development of a Dual Use Research Mitigation Plan (DURMP) reviewed by an Institutional Review Entity (IRE), and explicit training on the biosecurity implications of AI-accelerated research (Source 1, 7).
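The gate described in mitigation 1 reduces to comparing evaluation scores against pre-registered thresholds. The Python sketch below is a minimal illustration under stated assumptions, not any developer's actual policy: the capability names, the 0-to-1 score normalization, and the CCT_THRESHOLDS values are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical Critical Capability Thresholds (CCTs). Real thresholds would
# come from a published frontier safety policy, not from code constants.
CCT_THRESHOLDS = {
    "cbrn_uplift": 0.5,    # assumed: score at/above which CBRN uplift is critical
    "cyber_offense": 0.7,  # assumed threshold for a second tracked capability
}

@dataclass
class EvalResult:
    capability: str
    score: float  # assumed: normalized 0-1 result of a pre-deployment evaluation

def gate_deployment(results: list[EvalResult]) -> str:
    """Halt (or trigger maximum safeguards) if any capability reaches its CCT."""
    # Unknown capabilities fail closed: with no registered threshold, the
    # default of 0.0 means any measured score counts as a breach.
    breached = [r for r in results
                if r.score >= CCT_THRESHOLDS.get(r.capability, 0.0)]
    if breached:
        names = ", ".join(r.capability for r in breached)
        return f"HALT: CCT reached or exceeded ({names})"
    return "PROCEED: no CCT reached"

print(gate_deployment([EvalResult("cbrn_uplift", 0.62)]))  # -> HALT: ...
```

The one design choice worth noting is that the gate fails closed: a capability that was never assigned a threshold halts deployment rather than being silently ignored, which matches the "halt or apply maximum safeguards" posture of mitigation 1.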
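Mitigation 2's detection layer amounts to screening inference traffic before and after the model runs. Below is a hedged sketch assuming a simple keyword-pattern filter; real deployments use trained classifiers rather than regexes, and FLAGGED_PATTERNS, screen_request, and the identity flag are names invented for this example.

```python
import re

# Illustrative patterns only: a production detection layer would use trained
# classifiers, and these categories are assumptions made for the sketch.
FLAGGED_PATTERNS = [
    re.compile(r"synthesi[sz]e\s+.*\b(toxin|nerve agent)\b", re.IGNORECASE),
    re.compile(r"enhance\s+.*\b(transmissibility|lethality)\b", re.IGNORECASE),
]

def screen_request(prompt: str, identity_verified: bool) -> str:
    # Access-management layer: the most capable model tier is reachable
    # only with verified identity (e.g., through a gated API).
    if not identity_verified:
        return "DENY: identity verification required for this model tier"
    # Detection-and-intervention layer: flag inputs whose patterns suggest
    # weapons-related intent and route them to review instead of answering.
    for pattern in FLAGGED_PATTERNS:
        if pattern.search(prompt):
            return "INTERVENE: hold response pending human review"
    return "ALLOW"

print(screen_request("enhance the transmissibility of a pathogen", True))
# -> INTERVENE: hold response pending human review
```

Applying the same screening function to model outputs, not just user inputs, supplies the output-monitoring half of the defense-in-depth layer described above.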