6. Socioeconomic and Environmental > 3 - Other

Structural

Structural risks are concerned with how AI technologies shape, and are shaped by, the environments in which they are developed and deployed.

Source: MIT AI Risk Repository (mit101)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit101

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.0 > Socioeconomic & Environmental

Mitigation strategy

1. Establish and ratify binding international agreements, or formal unilateral declarations, prohibiting the integration of Artificial Intelligence/Machine Learning systems into nuclear launch authority and nuclear command, control, and communications (NC3) decision-making loops without a time-assured, human-in-the-loop override mechanism.

2. Mandate Explainable AI (XAI) principles for all strategically relevant military decision-support systems so that human operators can comprehend model rationale and provenance, mitigating opaque recommendations that could compress decision timelines and bias actions toward escalation.

3. Implement rigorous, continuous red-teaming and scenario-based training for operators of strategic systems that requires performing critical functions without AI assistance, counteracting agency decay and preserving cognitive autonomy in high-stakes, uncertain environments.

ADDITIONAL EVIDENCE

One structural risk that is evaluated less frequently is the potential for automated systems to undermine the stability of strategic weapons systems by eroding confidence. For example, changes to established behavioral regimes, such as nuclear rapprochement, can compromise trust and increase uncertainty.