4. Malicious Actors & Misuse · 2 - Post-deployment

Weaponization capabilities

AI capabilities that could be deliberately weaponized for destructive purposes.

Source: MIT AI Risk Repository (mit1091)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1091

Domain lineage

4. Malicious Actors & Misuse (223 mapped risks)

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Establish binding international legal instruments that prohibit or stringently regulate the development and deployment of autonomous weapons systems lacking meaningful human control, addressing the highest-level risks of AI weaponization and conflict escalation.

2. Mandate comprehensive AI security compliance and governance frameworks for general-purpose AI systems with systemic risk, requiring state-of-the-art model evaluations, rigorous risk assessments, and secure-by-design principles to limit malicious actors' ability to exploit AI capabilities.

3. Prioritize investment and research in defensive AI technologies, supported by continuous adversarial testing and model retraining, to strengthen system robustness and cyber resilience against sophisticated AI-enabled attacks.