Hazardous Biological and Chemical Technologies
AI systems such as LLMs, chemical LLMs (Skinnider et al., 2021; Moret et al., 2023), and other LLM-based biological design tools might soon facilitate the production of bioweapons, chemical weapons, and other hazardous technologies. In particular, LLMs might enable actors with less expertise to more easily synthesize dangerous pathogens, while customized chemical and biological design tools might be more concerning for expanding the capabilities of already sophisticated actors (e.g. states) (Sandbrink, 2023). Gopal et al. (2023) and Soice et al. (2023) demonstrated that people with little background could use LLMs to make progress toward developing pathogens such as the 1918 pandemic influenza virus. However, recent studies suggest that current LLMs are no more helpful than internet search in this regard (Mouton et al., 2024; Patwardhan et al., 2024).
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit1493
Domain lineage
4. Malicious Actors & Misuse
4.2 > Cyberattacks, weapon development or use, and mass harm
Mitigation strategy
1. Implement tiered Access Control and Staged Deployment protocols to strictly govern access to LLMs exhibiting novel biological or chemical design capabilities, ensuring initial use is limited to verified researchers under controlled conditions.
2. Deploy Robust Behavioral Alignment and Technical Guardrails (including specialized system prompts, refusal mechanisms, and content filtering) to systematically prevent the elicitation of hazardous biological and chemical synthesis instructions by shaping the model's output behavior.
3. Institute Continuous Dual-Use Capability Evaluation and Monitoring via rigorous, domain-specific benchmarks and efficacy assessments to proactively identify model vulnerabilities, measure the performance of existing counter-misuse strategies, and facilitate adaptive defense refinement.
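As a minimal sketch of how points 1 and 2 could combine in practice, the toy gate below checks a request against a denylist-style content filter and refuses flagged prompts unless the caller holds a sufficient access tier. All names here (`AccessTier`, `HAZARD_TERMS`, `screen_request`) are hypothetical illustrations, not taken from any real deployment; a production guardrail would use a trained classifier rather than keyword matching.

```python
from enum import IntEnum

class AccessTier(IntEnum):
    # Hypothetical tiers for a staged-deployment scheme
    PUBLIC = 0
    VERIFIED_RESEARCHER = 1
    CONTROLLED_FACILITY = 2

# Toy denylist standing in for a real content-filtering classifier
HAZARD_TERMS = {"pathogen synthesis", "nerve agent", "toxin production"}

# Tier required before dual-use queries are even considered
REQUIRED_TIER = AccessTier.CONTROLLED_FACILITY

def screen_request(prompt: str, tier: AccessTier) -> tuple[bool, str]:
    """Return (allowed, reason); refuse flagged prompts below the required tier."""
    flagged = any(term in prompt.lower() for term in HAZARD_TERMS)
    if flagged and tier < REQUIRED_TIER:
        return False, "refused: dual-use content requires controlled-access tier"
    return True, "allowed"

print(screen_request("explain pathogen synthesis steps", AccessTier.PUBLIC))
print(screen_request("summarize this safety report", AccessTier.PUBLIC))
```

Refusal logs from such a gate would also feed point 3: flagged-but-refused requests are exactly the data a continuous dual-use evaluation pipeline needs to measure how often misuse is attempted and whether the filter keeps pace.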