Military Domains
Perhaps the most obvious and worrying instances of AI conflict are those in which human conflict is already a major concern, such as military domains (although other, less salient forms of conflict, such as international trade wars, are also cause for concern). For example, beyond applications of narrower AI tools in lethal autonomous weapons systems (Horowitz, 2021), future AI systems might serve as advisors or negotiators in high-stakes military decisions (Black et al., 2024; Manson, 2024). Indeed, companies such as Palantir have already developed LLM-powered tools for military planning (Palantir, 2025), and the US Department of Defense has recently been evaluating models for such capabilities, with personnel revealing that they “could be deployed by the military in the very near term” (Manson, 2023). The use of AI in command and control systems to gather and synthesise information – or to recommend and even autonomously make decisions – could lead to rapid unintended escalation if these systems are not robust or are otherwise more conflict-prone (Johnson, 2021a; Johnson, 2020; Laird, 2020; see also Case Study 10).
ENTITY: 2 - AI
INTENT: 3 - Other
TIMING: 2 - Post-deployment
Risk ID: mit1212
Domain lineage: 7. AI System Safety, Failures, & Limitations > 7.6 Multi-agent risks
Mitigation strategy
1. Mandate and enforce "Meaningful Human Control" (MHC) over all critical functions in military AI systems, particularly the final decision to use lethal force. This requires human operators to retain the capacity to exercise judgment and intervene at all stages of the Observe-Orient-Decide-Act (OODA) loop, preventing fully autonomous, unreviewed decisions that could lead to unintended escalation.
2. Require rigorous, system-level safety engineering, such as System Theoretic Process Analysis (STPA), to ensure AI systems are robust and their decision processes are transparent and auditable (a "glass-box" approach). Implement compulsory "fail-safe" mechanisms that default to human control or system deactivation upon detection of anomalies, adversarial manipulation, or data-quality degradation.
3. Establish and maintain dedicated, resilient channels for bilateral crisis communication (e.g., emergency hotlines/CATALINK) between major military powers. This cooperation must include developing joint, sequenced de-escalation protocols specifically designed to manage and resolve incidents involving AI-enabled weapon or Command, Control, and Communications (C3) systems.
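The fail-safe logic in point 2 can be illustrated with a minimal sketch. All names here (`fail_safe_gate`, `SensorReading`, the threshold values) are illustrative assumptions for exposition, not part of any fielded system: each decision cycle is gated so that a severe anomaly triggers deactivation, degraded data quality defers to a human operator, and critical functions always require human authorisation regardless of system state.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    PROCEED = "proceed"                  # autonomous action permitted
    DEFER_TO_HUMAN = "defer_to_human"    # hand control to an operator
    DEACTIVATE = "deactivate"            # shut the system down


@dataclass
class SensorReading:
    # Hypothetical inputs: an anomaly score from an upstream detector
    # and a 0.0 (degraded) .. 1.0 (nominal) data-quality estimate.
    anomaly_score: float
    data_quality: float


def fail_safe_gate(reading: SensorReading,
                   critical: bool,
                   anomaly_threshold: float = 0.8,
                   quality_floor: float = 0.5) -> Decision:
    """Fail-safe disposition for one decision cycle (illustrative thresholds)."""
    # Severe anomaly or suspected adversarial manipulation: deactivate.
    if reading.anomaly_score >= anomaly_threshold:
        return Decision.DEACTIVATE
    # Degraded data quality: fall back to human control rather than act.
    if reading.data_quality < quality_floor:
        return Decision.DEFER_TO_HUMAN
    # Critical functions (e.g. use of force) always require a human,
    # even when all inputs look nominal -- meaningful human control.
    if critical:
        return Decision.DEFER_TO_HUMAN
    return Decision.PROCEED
```

The key design choice is that the gate fails closed: autonomous action (`PROCEED`) is the only branch that requires every check to pass, so any sensor degradation or anomaly removes the system's authority rather than expanding it.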