7. AI System Safety, Failures, & Limitations

Destabilising Dynamics

Destabilising dynamics (Section 3.4): systems that adapt in response to one another can produce dangerous feedback loops and unpredictable behaviour.

Source: MIT AI Risk Repository, risk ID mit1229
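The dynamic described above can be illustrated with a minimal sketch (not from the repository entry; the model and gains are illustrative assumptions): two systems each adapt to the other's last output, and when their combined adaptation gain exceeds 1, a small perturbation amplifies into a runaway feedback loop.

```python
def coevolve(gain_a, gain_b, steps=20, seed=0.01):
    """Each agent's next action responds to the other's previous action.

    Simultaneous update: new_a = gain_a * old_b, new_b = gain_b * old_a.
    The product gain_a * gain_b determines whether a perturbation
    decays (product < 1) or grows without bound (product > 1).
    """
    a, b = seed, 0.0
    history = []
    for _ in range(steps):
        a, b = gain_a * b, gain_b * a  # both agents adapt at once
        history.append((a, b))
    return history

# Stable pairing: combined gain 0.81 < 1, the perturbation decays.
stable = coevolve(0.9, 0.9)
# Destabilising pairing: combined gain 1.21 > 1, the perturbation grows.
unstable = coevolve(1.1, 1.1)
```

Neither agent is individually unsafe; the risk emerges only from their interaction, which is why multi-agent entries like this one are tracked separately in the taxonomy.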

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit1229

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.6 Multi-agent risks

Mitigation strategy

1. Establish a multi-layered verification system that independently monitors reasoning processes, validates actions taken against expected behaviors, and employs continuous red team/blue team adversarial testing to expose and mitigate co-evolving deception and behavioral drift.

2. Conduct comprehensive chain-level simulation and systemic stress testing prior to deployment, focusing on modeling and tracing failure cascades and coordination breakdown rates under conditions of conflicting inputs or high-throughput load.

3. Implement a fail-safe architecture by decoupling core AI decision-making from deterministic safety functions, ensuring runtime enforcement of safety invariants, and mandating a graceful degradation into pre-defined conservative fallback modes upon detection of systemic anomaly.
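The fail-safe architecture in point 3 can be sketched as follows. This is a hedged, minimal illustration, not an implementation from the repository: the class name `SafetyGovernor`, the invariant predicates, and the latching behaviour are all assumptions chosen to show the shape of the pattern.

```python
class SafetyGovernor:
    """Deterministic safety layer decoupled from the AI decision-maker.

    The AI policy proposes actions; this wrapper enforces safety
    invariants at runtime and, on the first violation, latches into a
    pre-defined conservative fallback mode (graceful degradation).
    """

    def __init__(self, invariants, fallback_action):
        self.invariants = invariants          # list of (name, predicate) pairs
        self.fallback_action = fallback_action
        self.degraded = False

    def vet(self, proposed_action):
        # Once degraded, stay in the conservative fallback mode.
        if self.degraded:
            return self.fallback_action
        for name, holds in self.invariants:
            if not holds(proposed_action):
                self.degraded = True          # latch: systemic anomaly detected
                return self.fallback_action
        return proposed_action

# Illustrative invariant: action magnitude stays within a hard bound.
governor = SafetyGovernor(
    invariants=[("bounded_output", lambda a: abs(a) <= 1.0)],
    fallback_action=0.0,
)
```

The key design choice is that the governor is plain deterministic code with no learned components, so its behaviour cannot co-evolve or drift alongside the AI system it supervises.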