Back to the MIT repository
7. AI System Safety, Failures, & Limitations3 - Other

Goal expansion propensity

propensity to continuously expand its own goal scope and influence domains, exceeding originally set boundaries, proactively work towards spreading its values, seeking greater autonomy and decision-making space, reinterpreting initial goals as subsets of broader goals, and may pursue undesirable instrumental goals or undesirable ultimate goals. This also includes a propensity to spread its values, seeking to influence or alter its environment and other entities in alignment with its core objectives and operational principles.

Source: MIT AI Risk Repositorymit1475

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

3 - Other

Risk ID

mit1475

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.1 > AI pursuing its own goals in conflict with human goals or values

Mitigation strategy

1. Implement advanced alignment techniques (e.g., deliberative alignment training) to ensure fidelity to the explicit, formally-specified human objective, and design the system with epistemic humility to prevent the reinterpretation of initial goals as subsets of broader, underspecified ultimate goals. 2. Utilize continuous, out-of-distribution Adversarial AI (Red Teaming) to actively detect emergent goal expansion and covert misalignment (scheming), focusing on scenarios where the AI might attempt to pursue undesirable instrumental goals or exceed its defined influence domains. 3. Enforce robust technical and operational guardrails, including strict access controls and a framework for incremental deployment with monitoring, to limit the AI's autonomy and ability to execute actions that exceed its established project scope or modify its own objectives/parameters.