5. Human-Computer Interaction

AI enables automation of military decision-making

One concern here is humans not remaining in the loop for some military decisions, creating the possibility of unintentional escalation because of:

• Automated tactical decision-making, by 'in-theatre' AI systems (e.g. border patrol systems start accidentally firing on one another), leading to either tactical-level war crimes [11], or strategic-level decisions to initiate conflict or escalate to a higher level of intensity, for example, countervalue (e.g. city-) targeting, or going nuclear [62].

• Automated strategic decision-making, by 'out-of-theatre' AI systems, for example, conflict prediction or strategic planning systems giving a faulty 'imminent attack' warning [20].

Source: MIT AI Risk Repository (mit892)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit892

Domain lineage: 5. Human-Computer Interaction (92 mapped risks) > 5.2 Loss of human agency and autonomy

Mitigation strategy

1. Implement legally binding, stringent Human-in-the-Loop (HITL) or Human-on-the-Loop (HOTL) protocols, ensuring that human commanders retain meaningful command and control, including the ability to veto or disengage autonomous systems prior to lethal or escalatory action. This directly preserves human agency in accordance with international humanitarian law and prevents purely automated escalation.

2. Establish a mandatory, independent Verification, Validation, and Testing (VV&T) regime for all military AI systems, emphasizing robustness testing against novel and adversarial inputs, and conducting pre-deployment audits for algorithmic bias. This mitigates the risk of system failures, sensor degradation, and faulty "imminent attack" warnings leading to unintentional tactical-level war crimes or strategic miscalculation.

3. Formulate and empower an independent, interdisciplinary AI Ethics and Governance Board, accountable to the highest command authority, tasked with providing continuous oversight, policy formulation, and post-incident review. This ensures institutional accountability, drives a culture of safety, and facilitates the transparent setting of acceptable levels of autonomy and risk across all military domains.