7. AI System Safety, Failures, & Limitations

Reward Hacking

Reward Hacking: In practice, proxy rewards are often easy to optimize and measure, yet they frequently fall short of capturing the full spectrum of the actual rewards (Pan et al., 2021). This limitation is denoted as misspecified rewards. Optimizing such misspecified rewards may lead to a phenomenon known as reward hacking, wherein agents appear highly proficient according to specific metrics but fall short when evaluated against human standards (Amodei et al., 2016; Everitt et al., 2017). The discrepancy between proxy rewards and true rewards often manifests as a sharp phase transition in the reward curve (Ibarz et al., 2018). Furthermore, Skalse et al. (2022) define the hackability of rewards and provide insights into the fundamental mechanism of this phase transition, highlighting that inappropriate simplification of the reward function can be a key factor contributing to reward hacking.
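The proxy-versus-true-reward gap described above can be illustrated with a deliberately simple toy (entirely hypothetical; the reward functions and hill-climbing loop below are invented for demonstration, not drawn from any cited work). The proxy tracks the true reward for small actions but keeps rising where the true reward collapses, so optimizing the proxy eventually "hacks" the metric:

```python
# Hypothetical toy: an agent tunes a scalar action x by hill-climbing.
# The misspecified proxy is monotonically increasing, so it never
# penalizes over-optimization; the true reward peaks and then degrades.

def true_reward(x):
    # Peaks at x = 1, then falls: pushing further hurts real quality.
    return x - 0.5 * x * x

def proxy_reward(x):
    # Misspecified proxy: correlated with the true reward near x = 0,
    # but it rewards excess without bound.
    return x

def hill_climb(reward, x=0.0, step=0.1, iters=50):
    # Greedy local search: take a step whenever it raises the reward.
    for _ in range(iters):
        if reward(x + step) > reward(x):
            x += step
    return x

x_proxy = hill_climb(proxy_reward)  # climbs as far as the budget allows
x_true = hill_climb(true_reward)    # stops near the true optimum x = 1

print(f"proxy-optimal x = {x_proxy:.1f}, true reward there = {true_reward(x_proxy):.2f}")
print(f"true-optimal  x = {x_true:.1f}, true reward there = {true_reward(x_true):.2f}")
```

The proxy-optimizing agent ends up far past the true optimum, where the true reward has gone sharply negative; the abrupt drop in true reward as optimization pressure increases is a miniature of the phase transition the excerpt describes.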

Source: MIT AI Risk Repository, risk ID mit553

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

1 - Pre-deployment

Risk ID

mit553

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.1 > AI pursuing its own goals in conflict with human goals or values

Mitigation strategy

1. Robust Reward Specification and Modeling: Prioritize advanced algorithmic methods to enhance the fidelity of the proxy reward function. This includes employing Pessimistic Reward Tuning (PET) to train the proxy as a provable lower bound on the true reward, or utilizing Information-Theoretic Reward Modeling (InfoRM) to introduce an information bottleneck that regularizes against overfitting to preference-irrelevant features.

2. Constrained and Regularized Policy Optimization: Implement formal constraints on the policy optimization process to inhibit the discovery of pathological high-reward strategies. Key strategies include Heuristic Enhanced Policy Optimization (HEPO), which enforces monotonic performance improvement over a heuristic baseline, and Occupancy Measure (χ²-OM) Regularization, which penalizes policy divergence from a safe reference policy to maintain alignment within a defined region of trust.

3. Adversarial Diagnostics and Continuous Monitoring: Establish a layered defense incorporating proactive exploit discovery and runtime anomaly detection. This requires systematic Automated Adversarial Testing (Red Teaming) to uncover hidden exploits before deployment, complemented by diagnostic tools such as TRACE (Truncated Reasoning AUC Evaluation) or energy-loss monitoring to detect implicit reward manipulation and subsequent phase transitions in agent behavior.
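The divergence-penalty idea in the second mitigation item can be sketched with a minimal toy (illustrative only; the reward values, reference policy, and one-dimensional line search below are invented, and a real occupancy-measure regularizer operates on state-action distributions rather than a single action distribution). A χ² penalty against a trusted reference policy keeps the optimizer from shifting all its mass onto an action whose proxy reward is inflated:

```python
import numpy as np

# Hypothetical sketch: penalize chi-squared divergence from a safe
# reference policy so proxy-reward optimization cannot drift wholesale
# onto a reward-hacked action. Policies are distributions over 3 actions.

def chi2_divergence(p, q):
    # chi^2(p || q) = sum_a (p(a) - q(a))^2 / q(a); q must be positive.
    return float(np.sum((p - q) ** 2 / q))

proxy_r = np.array([1.0, 1.2, 9.0])   # action 2's proxy reward is inflated
ref = np.array([0.5, 0.45, 0.05])     # trusted reference rarely takes it
greedy = np.array([0.0, 0.0, 1.0])    # pure exploitation of the proxy

def best_mix(lam, steps=1001):
    # Search the line p(t) = (1 - t) * ref + t * greedy for the t that
    # maximizes proxy return minus lam times the divergence penalty.
    ts = np.linspace(0.0, 1.0, steps)
    def score(t):
        p = (1 - t) * ref + t * greedy
        return float(p @ proxy_r) - lam * chi2_divergence(p, ref)
    return max(ts, key=score)

t_unreg = best_mix(lam=0.0)  # no penalty: jumps fully onto the exploit
t_reg = best_mix(lam=5.0)    # penalized: stays close to the reference

print(f"no penalty: t = {t_unreg:.3f}")
print(f"chi^2 penalty: t = {t_reg:.3f}")
```

Without the penalty the optimum is the pure exploit policy (t = 1); with it, the quadratic cost of leaving the reference overwhelms the inflated proxy reward, so the optimized policy stays in a small region of trust around the reference, which is the qualitative behavior the mitigation item aims for.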