4. Malicious Actors & Misuse

Warfare

The dangers of AI amplifying the effectiveness/failures of nuclear, chemical, biological, and radiological weapons.

Source: MIT AI Risk Repository

ENTITY: 2 - AI
INTENT: 3 - Other
TIMING: 3 - Other
Risk ID: mit1044

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks)
Subdomain: 4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Institute mandatory global regulatory frameworks, modeled on bodies such as the Financial Action Task Force (FATF), to establish consistent baseline safety standards for all high-capability, general-purpose AI models ("frontier models"). This framework must include independent expert auditing and explicit assessment of chemical, biological, radiological, and nuclear (CBRN) misuse risks prior to deployment.

2. Enforce doctrinal and engineering requirements that mandate meaningful human judgment and oversight in all critical decision-making involving the deployment or engagement of weapons systems, particularly those with catastrophic potential (e.g., nuclear command and control), to prevent accidental escalation or technical failure.

3. Require comprehensive pre-release testing of advanced AI systems, including aggressive red-teaming and adversarial testing, to proactively identify and mitigate vulnerabilities that could facilitate unconventional weapon development or enable compromise by malicious actors.