Security
Implications of the weaponization of AI for defence (the embedding of AI-based capabilities across the land, air, naval and space domains may affect combined arms operations).
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit640
Domain lineage
4. Malicious Actors & Misuse
4.2 > Cyberattacks, weapon development or use, and mass harm
Mitigation strategy
1. Establish Enforceable International Governance and Standards. Prioritize the development of a global regulatory framework, ideally through international collaboration, to mandate baseline safety standards for AI systems in defense, ensure the integration of AI safety measures into national policies, and institute independent expert evaluations and audits to mitigate catastrophic risks and curb an unconstrained AI arms race.
2. Mandate Human-Centric Control and Explainability. Implement strict policy and technical requirements to ensure human operators retain final authority over the use of lethal force (human-in-the-loop) and to enforce high levels of transparency, traceability, and explainability (XAI) in military AI systems, thereby preventing operator over-reliance and mitigating the risk of accidental escalation due to system failures or misaligned decision-making.
3. Invest in Adversarial AI Robustness and Defensive Capabilities. Dedicate significant resources to embedding security-by-design principles and implementing continuous adversarial testing and training throughout the AI system lifecycle. Simultaneously, accelerate the development and integration of advanced AI-powered defensive capabilities, such as User and Entity Behavior Analytics (UEBA) and real-time threat tracing, to autonomously detect and counter highly adaptive, AI-driven offensive cyber and unconventional weapons threats.