Malicious use and abuse (military applications)
The application of AI to military purposes is rapidly transforming weapons technology. Lethal Autonomous Weapons Systems (LAWS) can detect, engage, and eliminate human targets without human input.341 In 2020, an AI agent defeated experienced F-16 pilots in multiple simulated aerial combat scenarios, notably winning 5-0 against a human pilot through “aggressive and precise maneuvers” that the human could not counter.342 Fully autonomous drones are already operational.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit734
Domain lineage
4. Malicious Actors & Misuse
4.2 > Cyberattacks, weapon development or use, and mass harm
Mitigation strategy
1. Mandate and enforce "meaningful human control" over all critical functions of Lethal Autonomous Weapon Systems (LAWS), so that a human operator retains the ability to intervene and override, and bears legal and ethical responsibility for decisions concerning the application of lethal force.
2. Establish and execute a rigorous lifecycle governance framework requiring continuous testing, evaluation, and recertification of military AI systems, to guarantee technical reliability, predictability, and retention of safety features against adaptive adversaries and evolving operational contexts.
3. Pursue binding international agreements to establish norms of restraint, accelerate AI arms control discussions, and adopt a unified regulatory approach that prohibits high-risk applications of LAWS while setting clear technical and ethical standards for the rest.
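The "meaningful human control" requirement in item 1 can be illustrated as a default-deny authorization gate: lethal action proceeds only when an explicit human approval is present, and any override, abort, or uncertainty fails safe. This is a minimal sketch only; all names here (`EngagementRequest`, `Decision`, `authorize_engagement`, the confidence threshold) are hypothetical and not drawn from any real weapons system or standard.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    """Possible human operator decisions (hypothetical taxonomy)."""
    APPROVE = "approve"
    OVERRIDE = "override"
    ABORT = "abort"


@dataclass
class EngagementRequest:
    """A system-generated request to engage a target (illustrative only)."""
    target_id: str
    confidence: float  # system's target-classification confidence, 0.0-1.0


def authorize_engagement(request: EngagementRequest,
                         human_decision: Decision,
                         min_confidence: float = 0.99) -> bool:
    """Default-deny gate: lethal action requires BOTH explicit human
    approval and high system confidence; anything else halts it."""
    if human_decision is not Decision.APPROVE:
        return False  # human veto (override/abort) always wins
    if request.confidence < min_confidence:
        return False  # fail safe on uncertain target classification
    return True
```

The design choice worth noting is that the gate is default-deny: every path except the single fully-approved, high-confidence case returns `False`, which is the software analogue of keeping a human "in the loop" rather than merely "on the loop".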