4. Malicious Actors & Misuse

Real-world risks (Risks of misuse of dual-use items and technologies)

Through misuse or abuse, AI can pose serious risks to national security, economic security, and public health security. It can greatly lower the expertise required for non-experts to design, synthesize, acquire, and use nuclear, biological, and chemical weapons and missiles, and it can be used to design cyber weapons that attack a wide range of potential targets through methods such as automated vulnerability discovery and exploitation.

Source: MIT AI Risk Repository (mit701)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit701

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm
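The classification fields above (Entity, Intent, Timing, Domain lineage) form a structured record in the repository's causal and domain taxonomies. As a rough illustration, one such entry could be represented as a record like this; the class name and field layout are hypothetical and are not the repository's actual schema, only a sketch of the card above:

```python
from dataclasses import dataclass

# Hypothetical sketch of one MIT AI Risk Repository entry.
# Field values mirror the card above; the class itself is
# illustrative, not the repository's real data model.
@dataclass(frozen=True)
class RiskEntry:
    risk_id: str    # repository identifier, e.g. "mit701"
    domain: str     # top-level domain lineage
    subdomain: str  # subdomain within the domain taxonomy
    entity: str     # causal taxonomy: who caused the risk
    intent: str     # causal taxonomy: intentional vs. unintentional
    timing: str     # causal taxonomy: pre- vs. post-deployment

mit701 = RiskEntry(
    risk_id="mit701",
    domain="4. Malicious Actors & Misuse",
    subdomain="4.2 > Cyberattacks, weapon development or use, and mass harm",
    entity="1 - Human",
    intent="1 - Intentional",
    timing="2 - Post-deployment",
)
```

A frozen dataclass suits this kind of catalog entry because each record is immutable once published and can be safely shared or used as a dictionary key.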

Mitigation strategy

1. Prioritize Upstream Safety and Red-Teaming: Mandate and standardize pre-deployment safety evaluations for dual-use foundation models, utilizing independent, rigorous red-teaming exercises to proactively identify and mitigate vulnerabilities related to the intentional generation or acceleration of chemical, biological, or cyber-offensive capabilities.

2. Establish a Formal Misuse Risk Governance Framework: Implement a comprehensive AI lifecycle framework that includes continuous monitoring of misuse incidents, regular quantitative evaluation of the efficacy of detection and mitigation strategies, and the adaptive refinement of counter-misuse practices in alignment with governmental and security standards.

3. Enhance Defensive Cyber and CBRN Capabilities: Accelerate the integration of AI-driven tools for both cyber defense (e.g., User and Entity Behavior Analytics for threat detection) and CBRN counter-measures, thereby increasing the speed and scope of defensive activities to neutralize advanced, AI-enabled malicious activity and improve network resilience.