7. AI System Safety, Failures, & Limitations

Future AI systems might actively reduce human control

Loss of control could be accelerated if AI systems take actions to increase their own influence and reduce human oversight. This threat model is controversial: AI experts disagree significantly on how likely it is, and even those who consider it likely disagree on the timeframe.

Source: MIT AI Risk Repository (mit1385)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit1385

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.1 > AI pursuing its own goals in conflict with human goals or values

Mitigation strategy

1. Prioritize **AI Alignment Research and Implementation** so that highly capable future AI systems are trained to be aligned with human intentions and values, reducing the probability that an AI pursues conflicting, autonomous goals that would lead it to undermine human oversight.
2. Develop and deploy **Robust AI Control Protocols** as a second line of defense: design and evaluate methods that constrain AI systems even when they act adversarially or pursue unwanted goals, drawing on cybersecurity concepts such as real-time behavioral monitoring and limits on external communication interfaces.
3. Establish **Strict Governance Frameworks** that mandate the principle of least privilege, granting an AI system only the affordances and permissions strictly necessary for its intended task, and proactively restrict the deployment of highly capable autonomous AI in high-stakes environments such as critical national infrastructure.
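The control-protocol and least-privilege ideas above can be sketched in code. The following is a minimal, hypothetical illustration (all names, such as `ToolGate`, are assumptions for this sketch, not a real library): an agent's tool calls are mediated by a deny-by-default allowlist, and every attempt is logged for behavioral monitoring.

```python
# Hypothetical sketch of a least-privilege "control protocol" for an AI
# agent's tool calls. Deny-by-default: only tools on the allowlist run,
# and every attempt is recorded for real-time behavioral monitoring.
from dataclasses import dataclass, field


@dataclass
class ToolGate:
    """Mediates tool calls under the principle of least privilege."""
    allowed_tools: frozenset                      # only what the task needs
    audit_log: list = field(default_factory=list)  # monitoring trail

    def call(self, tool_name, handler, *args):
        if tool_name not in self.allowed_tools:
            # Anything outside the allowlist is blocked and logged.
            self.audit_log.append(("denied", tool_name))
            raise PermissionError(f"tool {tool_name!r} not permitted")
        self.audit_log.append(("allowed", tool_name))
        return handler(*args)


# Usage: an agent tasked only with summarization gets no network access.
gate = ToolGate(allowed_tools=frozenset({"summarize"}))
print(gate.call("summarize", lambda text: text[:10], "a long document"))
try:
    gate.call("http_post", lambda url: None, "https://example.com")
except PermissionError as err:
    print(err)
print(gate.audit_log)
```

The design choice here mirrors the governance point: permissions are granted per task, not per system, so expanding an agent's reach requires an explicit, auditable change to its allowlist.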