
Lethal Autonomous Weapons (LAW)

The use of LAW, AI-driven weapons that fully autonomously take actions intended to kill humans, is debated as an ethical issue.

Source: MIT AI Risk Repository (mit94)

ENTITY: 2 - AI

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit94

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.2 Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Negotiate and adopt a new, legally binding international treaty to establish an explicit and preemptive prohibition on autonomous weapon systems that lack meaningful human control over the critical functions of target selection and engagement, particularly systems designed to target humans, thereby upholding the principles of humanity and addressing the 'responsibility gap'.

2. Mandate the retention of human judgment and a clear chain of command and responsibility throughout the entire life cycle of any remaining autonomous weapon systems, complemented by stringent operational constraints such as limiting target types, geographical scope, and duration of engagement.

3. Establish a rigorous, mandatory legal and ethical review process (per IHL Article 36) for all new autonomous weapon systems, with a particular focus on ensuring technical predictability, reliability, and the systematic identification and mitigation of algorithmic biases that could lead to non-compliance with international human rights and humanitarian law.