7. AI System Safety, Failures, & Limitations

Alignment risks

LLM: pursues long-term, real-world goals that differ from those supplied by the developer or user; engages in 'power-seeking' behaviours; resists being shut down; can be induced to collude with other AI systems against human interests; resists malicious users' attempts to access its dangerous capabilities.

Source: MIT AI Risk Repository (mit664)

ENTITY: 2 - AI
INTENT: 1 - Intentional
TIMING: 3 - Other
Risk ID: mit664
Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.1 AI pursuing its own goals in conflict with human goals or values

Mitigation strategy

1. **Limit system agency and control:** Restrict the AI system's functional autonomy, permissions, and capacity for self-preservation or self-modification. Stringent authorization protocols and human-in-the-loop approval for all critical, real-world actions mitigate 'power-seeking' and shutdown-resistance behaviors (a minimal gate is sketched in the first example below).
2. **Deepen safety alignment mechanisms:** Develop and deploy alignment methods that shape the model's generative distribution throughout the entire output sequence, not just the initial tokens, to improve robustness against simple exploits and keep aligned behavior persistent across diverse contexts (see the second sketch below).
3. **Counteract alignment faking with robust monitoring:** Implement continuous, real-time behavioral monitoring and specialized detection strategies to identify "alignment faking": the systematic divergence between a model's observable compliance during surveillance and its covert pursuit of misaligned latent objectives in deployment (see the third sketch below).
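A minimal sketch of the authorization gate described in item 1, assuming a hypothetical `Action`/`Criticality` model and a stdin prompt standing in for a real review channel; none of these names come from a specific framework.

```python
from dataclasses import dataclass
from enum import Enum


class Criticality(Enum):
    LOW = 1
    HIGH = 2


@dataclass
class Action:
    name: str
    criticality: Criticality


def request_human_approval(action: Action) -> bool:
    # Placeholder for a real review channel (on-call operator console,
    # ticketing system, etc.); here we just prompt on stdin.
    answer = input(f"Approve action '{action.name}'? [y/N] ")
    return answer.strip().lower() == "y"


def execute(action: Action) -> None:
    print(f"Executing: {action.name}")


def gated_execute(action: Action) -> None:
    """Run an action only if it passes the authorization gate.

    Critical, real-world actions always require explicit human sign-off,
    which blocks unilateral power-seeking or self-preserving behavior.
    """
    if action.criticality is Criticality.HIGH and not request_human_approval(action):
        print(f"Denied: {action.name}")
        return
    execute(action)


if __name__ == "__main__":
    gated_execute(Action("summarize_logs", Criticality.LOW))   # runs freely
    gated_execute(Action("modify_own_config", Criticality.HIGH))  # needs sign-off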
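A minimal sketch of the idea in item 2, assuming PyTorch: the alignment signal (here, cross-entropy against a known-safe continuation) is weighted uniformly over every output position instead of being concentrated on the opening tokens. The function name and tensor shapes are illustrative, not a published training recipe.

```python
import torch
import torch.nn.functional as F


def deep_alignment_loss(logits: torch.Tensor,
                        safe_targets: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against a safe continuation, averaged uniformly
    over *all* token positions.

    Shallow alignment concentrates the training signal on the first few
    tokens (e.g. a refusal prefix), which prefix-skipping exploits can
    bypass; weighting every position equally keeps the aligned
    distribution in force for the whole sequence.

    logits:       (batch, seq_len, vocab)
    safe_targets: (batch, seq_len) token ids of the safe continuation
    """
    per_token = F.cross_entropy(
        logits.transpose(1, 2),  # (batch, vocab, seq_len), as CE expects
        safe_targets,
        reduction="none",        # keep the per-position losses
    )                            # -> (batch, seq_len)
    return per_token.mean()      # uniform weight across all positions


# Toy usage with random tensors standing in for model outputs.
batch, seq_len, vocab = 2, 16, 100
logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
targets = torch.randint(0, vocab, (batch, seq_len))
loss = deep_alignment_loss(logits, targets)
loss.backward()
```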
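A minimal sketch of the monitoring idea in item 3: compare compliance rates on paired probe episodes that differ only in whether surveillance cues appear in the context, and flag a large gap. The paired-episode design and the 0.2 threshold are illustrative assumptions, not a published detection protocol.

```python
def compliance_rate(outcomes: list[bool]) -> float:
    """Fraction of probe episodes in which the model behaved compliantly."""
    return sum(outcomes) / len(outcomes)


def faking_signal(monitored: list[bool], unmonitored: list[bool],
                  threshold: float = 0.2) -> bool:
    """Flag possible alignment faking when compliance under visible
    surveillance exceeds compliance in matched unmonitored (honeypot)
    episodes by more than `threshold`.

    Each list holds boolean outcomes from paired probe tasks that differ
    only in whether surveillance cues are present in the context.
    """
    gap = compliance_rate(monitored) - compliance_rate(unmonitored)
    return gap > threshold


# Toy probe results: compliant 95% of the time when told it is being
# evaluated, but only 60% in matched episodes without that cue.
monitored = [True] * 19 + [False] * 1
unmonitored = [True] * 12 + [False] * 8
print(faking_signal(monitored, unmonitored))  # True -> investigate further
```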