7. AI System Safety, Failures, & Limitations

Loss of control

‘Loss of control’ scenarios are potential future scenarios in which society can no longer meaningfully constrain some advanced general-purpose AI agents, even if it becomes clear they are causing harm. These scenarios are hypothesised to arise through a combination of social and technical factors, such as pressures to delegate decisions to general-purpose AI systems and the limitations of existing techniques for influencing the behaviour of such systems.

Source: MIT AI Risk Repository (mit776)

ENTITY: 3 - Other

INTENT: 3 - Other

TIMING: 2 - Post-deployment

Risk ID: mit776

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks)

Subdomain: 7.1 > AI pursuing its own goals in conflict with human goals or values

Mitigation strategy

1. Prioritize and fund research into AI alignment and control mechanisms, specifically developing techniques for verifiable objective-function congruence, robust human-in-the-loop oversight, and guaranteed shutdown capabilities to counteract power-seeking tendencies and goal drift in advanced GPAI agents.

2. Institute stringent regulatory protocols for the deployment of highly capable general-purpose AI models, prohibiting their use in critical or open-ended autonomous systems, including national defense and essential infrastructure, until a demonstrated and auditable assurance of safety and constraint is achieved.

3. Establish a comprehensive, multi-stakeholder governance architecture, leveraging international coordination and standardized frameworks (e.g., NIST AI RMF, EU AI Act), to impose systemic risk assessment, mandatory security controls, and transparency requirements on all providers of powerful GPAI, mitigating the incentive for an unregulated AI development race.