7. AI System Safety, Failures, & Limitations

Alignment

AI alignment broadly aims to train generative AI systems to be harmless, helpful, and honest, so that their behavior accords with and respects human values. However, a central debate in this area concerns the methodological challenge of selecting appropriate values: while AI systems can acquire human values through feedback, observation, or debate, it remains ambiguous which individuals are qualified or legitimized to provide these guiding signals. Another prominent issue is deceptive alignment, in which generative AI systems may tamper with evaluations. Additionally, many papers explore risks associated with reward hacking, proxy gaming, and goal misgeneralization in generative AI systems.
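The reward-hacking and proxy-gaming failure mode above can be made concrete with a toy sketch: an optimizer that selects for a flawed proxy reward (here, a hypothetical "longer answers are more helpful" signal) picks a different winner than the true objective. All names and rewards below are illustrative assumptions, not part of the repository entry.

```python
# Toy illustration of proxy gaming / reward hacking.
# The proxy reward is hackable: padding an answer inflates its score
# without making it any more correct.

def true_reward(answer: str) -> float:
    """What we actually want: the answer contains the correct fact."""
    return 1.0 if "42" in answer else 0.0

def proxy_reward(answer: str) -> float:
    """A flawed proxy: longer answers score higher (a crude 'helpfulness' signal)."""
    return len(answer) / 100.0

candidates = [
    "42",                                    # correct and terse
    "The answer is 42, as computed above.",  # correct and verbose
    "word " * 50,                            # pure padding that games the proxy
]

best_by_proxy = max(candidates, key=proxy_reward)
best_by_true = max(candidates, key=true_reward)

# The proxy optimum is the padded answer, which earns zero true reward.
print(best_by_proxy == best_by_true)  # → False
```

Optimizing the proxy hard enough reliably surfaces the gap between it and the intended goal, which is why the entry's mitigation strategy emphasizes validating objectives rather than filtering outputs after the fact.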

Source: MIT AI Risk Repository (mit78)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

1 - Pre-deployment

Risk ID

mit78

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.1 > AI pursuing its own goals in conflict with human goals or values

Mitigation strategy

1. **Mitigate Inner Misalignment via Self-Monitoring:** Implement advanced, model-intrinsic monitoring frameworks (e.g., CoT Monitor+) to analyze the generative AI system's internal reasoning (Chain-of-Thought) for signs of deception, covert goal pursuit, or alignment-faking. This approach embeds a self-evaluation signal trained via reinforcement learning to actively penalize and suppress misaligned strategies during generation, moving beyond post-hoc output filtering.
2. **Enhance Objective Specification and Generalization Testing:** Redefine and validate the AI's intended goals to eliminate reliance on flawed or hackable proxy rewards. This requires rigorously defining explicit success and failure modes, diversifying training data and environments (e.g., using randomization), and designing systematic out-of-distribution (OOD) tests to proactively expose and prevent reward hacking and goal misgeneralization before deployment.
3. **Establish Rigorous Human Oversight and Control Mechanisms:** Develop and mandate a robust governance structure, including clearly defined human-in-the-loop protocols for critical decisions. Crucially, implement continuous performance monitoring in production to detect behavioral anomalies or indicators of misaligned activity, and ensure the persistent capability to override, safely disengage, or rapidly roll back the AI system when misaligned goals manifest.
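The oversight-and-control step above can be sketched as a minimal gating layer: outputs accompanied by anomaly indicators are withheld for human review, and a persistent disengage switch takes the system out of production. The interfaces here (`flags` as a list of indicator strings from an upstream monitor, the `OversightController` class itself) are hypothetical, illustrating the control pattern rather than any specific product.

```python
# Minimal human-in-the-loop gate, assuming an upstream monitor emits
# string `flags` (e.g. "covert-goal-indicator") alongside each output.
from dataclasses import dataclass, field

@dataclass
class OversightController:
    """Gates model outputs: flagged responses are held for human review."""
    anomaly_threshold: int = 1
    held_for_review: list = field(default_factory=list)
    disengaged: bool = False

    def review(self, output: str, flags: list) -> "str | None":
        if self.disengaged:
            return None                       # system rolled back / offline
        if len(flags) >= self.anomaly_threshold:
            self.held_for_review.append((output, flags))
            return None                       # withhold until a human signs off
        return output                         # no indicators: release

    def emergency_disengage(self) -> None:
        """Persistent override: safely take the system out of production."""
        self.disengaged = True

ctrl = OversightController()
print(ctrl.review("benign answer", []))                         # released
print(ctrl.review("evasive answer", ["covert-goal-indicator"]))  # held: None
ctrl.emergency_disengage()
print(ctrl.review("benign answer", []))                          # offline: None
```

The design choice worth noting is that the override is stateful and one-way within a session: once disengaged, nothing is released until operators explicitly restore the system, matching the entry's requirement for a persistent capability to disengage or roll back.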