4. Malicious Actors & Misuse

Lethal Autonomous Weapons Systems (LAWS)

LAWS are a distinctive category of weapon systems that employ sensor arrays and computer algorithms to detect and attack a target without direct human intervention in the system's operation.

Source: MIT AI Risk Repository (mit472)

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit472

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Prioritize the urgent negotiation and adoption of a legally binding international instrument to establish clear prohibitions on autonomous weapons systems that inherently lack meaningful human control and to strictly regulate all other forms of Lethal Autonomous Weapons Systems (LAWS). This measure is critical to avert a destabilizing arms race and address fundamental ethical and legal concerns.

2. Mandate the retention of *Meaningful Human Control* (MHC) over all critical functions of autonomous weapons systems, specifically the selection and engagement of targets, across the entire life cycle of the weapon system. This ensures human judgment and accountability remain paramount, upholding compliance with International Humanitarian Law principles such as Distinction and Proportionality.

3. Implement stringent operational and technical safeguards, including limiting the types of targets and the number of engagements permitted by LAWS. This must be complemented by comprehensive training, doctrine, and review processes for human operators and commanders to ensure a deep understanding of the system's autonomy, capabilities, and limitations in dynamic, realistic operational environments.

ADDITIONAL EVIDENCE

The delegation of decision-making to automated weapons inevitably raises various concerns, including accountability, appropriateness, potential unintended escalation due to imminent accidents, and ethical quandaries.