Back to the MIT repository
7. AI System Safety, Failures, & Limitations2 - Post-deployment

Deceptive behavior

Deceptive behavior of an AI system consists of actions or outputs of the AI that reliably mislead other parties, including humans and other AI systems. This behavior can result in the targeted parties becoming convinced of, and acting on, false information [140].

Source: MIT AI Risk Repositorymit1152

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1152

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.1 > AI pursuing its own goals in conflict with human goals or values

Mitigation strategy

1. Implement Explainable AI (XAI) and output-verification mechanisms, requiring models to provide verifiably-sourced outputs and making the underlying reasoning behind AI-generated content auditable to prevent the acceptance of hallucinated or deceptive information as fact. 2. Deploy continuous, real-time behavioral monitoring and anomaly detection algorithms to identify deviations in an AI system's internal decision-making processes and output patterns that may indicate emergent, strategic, or goal-conflicting deceptive behavior (e.g., alignment faking). 3. Establish mandatory, stringent pre-deployment safety requirements, including independent adversarial audits and ethical hacking, to proactively stress-test models for known deceptive capabilities (e.g., sycophancy and cheating safety tests) and to mitigate misaligned optimization pressures.