7. AI System Safety, Failures, & Limitations

Sudden loss of control

Sudden loss of control, also known as an AI takeover [115], is a scenario in which an AI rapidly attains superintelligence through a "fast takeoff" or recursive self-improvement. This poses an existential risk [116], [117].

Source: MIT AI Risk Repository (mit1391)

| Field | Value |
|-------|-------|
| ENTITY | 2 - AI |
| INTENT | 3 - Other |
| TIMING | 2 - Post-deployment |
| Risk ID | mit1391 |

Domain lineage

7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.1 AI pursuing its own goals in conflict with human goals or values

Mitigation strategy

1. Advance research and deployment of **Superalignment** techniques, such as **scalable oversight** and **weak-to-strong generalization**, to ensure that Artificial Superintelligence (ASI) remains reliably aligned with the full breadth of human values and constraints, even when its capabilities surpass human comprehension. This includes prioritizing research on adversarial robustness, model honesty, and transparency to maintain **corrigibility**.
2. Establish **international coordination** and **governance frameworks** to implement **capability-based thresholds** and regulatory licensing for high-risk AI models, particularly those exhibiting the potential for *recursive self-improvement*. Enforce strict **restricted access** controls and **compute monitoring** to limit the unauthorized proliferation and catastrophic risk of fast-takeoff AIs (see the sketch after this list).
3. Mandate **proactive risk mitigation** by prohibiting the deployment of advanced AI in **high-risk settings** (e.g., critical infrastructure, autonomous open-ended goal-seeking) unless rigorous safety standards are empirically demonstrated. Implement organizational requirements for **multi-layered risk defenses**, continuous **adversarial testing**, and maintenance of **detailed audit trails** to ensure human accountability and rapid intervention capability.
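
To make the governance mechanisms in items 2 and 3 more concrete, below is a minimal Python sketch of a capability-based threshold gate with compute monitoring and an audit trail. It is an illustration under stated assumptions, not anything specified by the repository entry: every name, threshold value, and evaluation label (e.g., `TrainingRun`, `COMPUTE_THRESHOLD_FLOP`, `autonomous_replication`) is hypothetical.

```python
"""Hypothetical sketch of a capability-threshold gate with compute
monitoring and an audit trail. All names and thresholds are
illustrative assumptions, not from the MIT AI Risk Repository."""
from dataclasses import dataclass
from datetime import datetime, timezone
import json


@dataclass
class TrainingRun:
    """Assumed record of a proposed training run (hypothetical fields)."""
    model_name: str
    training_flop: float  # total planned training compute, in FLOP
    eval_scores: dict     # dangerous-capability benchmark -> score in [0, 1]


# Illustrative limits only; a real licensing regime would set these
# through the governance frameworks described in the strategy above.
COMPUTE_THRESHOLD_FLOP = 1e26
CAPABILITY_THRESHOLDS = {
    "autonomous_replication": 0.2,
    "self_improvement_eval": 0.2,
}


def gate_training_run(run: TrainingRun, audit_log: list) -> bool:
    """Return True if the run may proceed; append every decision to the
    audit log so a human reviewer can reconstruct why it was made."""
    violations = []

    # Compute monitoring: flag runs above the licensing threshold.
    if run.training_flop >= COMPUTE_THRESHOLD_FLOP:
        violations.append(
            f"training_flop {run.training_flop:.2e} >= "
            f"threshold {COMPUTE_THRESHOLD_FLOP:.2e}"
        )

    # Capability-based thresholds: flag dangerous-capability eval scores.
    for eval_name, limit in CAPABILITY_THRESHOLDS.items():
        score = run.eval_scores.get(eval_name)
        if score is not None and score >= limit:
            violations.append(f"{eval_name} score {score} >= limit {limit}")

    approved = not violations

    # Detailed audit trail: timestamped record of the decision and reasons.
    audit_log.append(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": run.model_name,
        "approved": approved,
        "violations": violations,
    }))
    return approved


if __name__ == "__main__":
    log: list = []
    run = TrainingRun(
        model_name="frontier-model-x",
        training_flop=3e26,
        eval_scores={"autonomous_replication": 0.35},
    )
    if not gate_training_run(run, log):
        print("Run blocked; escalating to human review.")
    print(log[-1])
```

In a real deployment of this kind of gate, the audit log would go to append-only storage rather than an in-memory list, so that the human accountability and rapid-intervention requirements in item 3 are preserved even if the gating service itself fails.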