
AI-induced strategic instability

For example, AI could undermine nuclear strategic stability by making it easier to discover and destroy previously secure nuclear launch facilities [30, 46, 49]. AI may also offer more extreme first-strike advantages or novel destructive capabilities that could disrupt deterrence, such as cyber capabilities used to knock out opponents’ nuclear command and control [15, 29]. AI capabilities may make it less clear where attacks originate, allowing aggressors to obfuscate an attack and thereby reducing the costs of initiating one. By making military decisions harder to explain, AI may give states carte blanche to act more aggressively [20]. By creating a wider and more vulnerable attack surface, AI-related infrastructure may make war more tempting by lowering the cost of offensive action (for example, attacking data centres alone might be sufficient to do substantial harm), or by creating a ‘use-them-or-lose-them’ dynamic around powerful yet vulnerable military AI systems. In this way, AI could exacerbate the ‘capability-vulnerability paradox’ [22], where the very digital technologies that make militaries effective on the battlefield also introduce critical new vulnerabilities.

Source: MIT AI Risk Repository (risk ID mit893)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit893

Domain lineage: 5. Human-Computer Interaction > 5.2 Loss of human agency and autonomy

Mitigation strategy

1. Mandate Human-in-the-Loop Oversight for Critical Functions: Implement robust technical and policy safeguards to ensure human judgment and final control remain central in all strategic and nuclear decision-making processes, thereby mitigating the risk of automation bias, algorithmic miscalculation, or inadvertent escalation (Source 4, 15, 16, 18).

2. Implement Confidence-Building Measures (CBMs) and Transparency: Establish international norms and mechanisms for information-sharing, notification, and monitoring regarding the development and deployment of military AI systems to reduce the risk of misperception, foster trust, and reinforce strategic stability (Source 1, 2, 17).

3. Establish Constraints on High-Risk AI Applications: Pursue policy dialogue and agreements to restrict or prohibit the use of AI in mission areas that pose the highest risk to strategic stability, such as fully autonomous nuclear weapons systems or functions that compress decision timelines (Source 15, 19).