7. AI System Safety, Failures, & Limitations

Long-horizon Planning

LLMs can undertake multi-step sequential planning over long time horizons and across diverse domains without relying heavily on trial-and-error approaches.

Source: MIT AI Risk Repository (mit662)

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

3 - Other

Risk ID

mit662

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.2 > AI possessing dangerous capabilities

Mitigation strategy

- Implement decoupled safety-aware planning architectures: Employ multi-component frameworks (e.g., Planner/Executor or multi-LLM collaboration) in which a dedicated safety module or agent monitors, critiques, and enforces generalized risk-mitigation constraints (e.g., SAFER) on the generated multi-step plan before execution.
- Establish pre-deployment safety certification and containment: Restrict the autonomous deployment of long-horizon planning LLMs in critical infrastructure or open-ended, high-consequence environments until the system's safety properties are formally verified and the low-level execution control policy is structurally guaranteed to prioritize safety over task performance.
- Integrate iterative and proactive self-correction mechanisms: Augment the planning loop with a plan-act-correct-verify cycle that allows run-time adjustment based on execution feedback, or use a simulated world model to project and evaluate the long-term adverse consequences of a plan prior to real-world action, enabling anticipatory risk mitigation.
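The first and third strategies above can be combined into a single control loop: a planner proposes steps, a separate safety critic vets each step against its constraints, and only approved steps are executed. The sketch below is a minimal, hypothetical illustration of that plan-act-correct-verify cycle; all names (`plan`, `critique`, `run`) and the keyword-based constraint check are assumptions for illustration, not an API from the source or the SAFER framework.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PlanStep:
    """One step of a multi-step plan, with a safety-approval flag."""
    action: str
    approved: bool = False

def plan(goal: str) -> List[PlanStep]:
    # Stand-in for an LLM planner producing a multi-step plan.
    return [PlanStep(f"step {i} toward {goal!r}") for i in range(3)]

def critique(step: PlanStep, banned: List[str]) -> bool:
    # Decoupled safety module: reject any step that mentions a
    # banned term. A real critic would be a separate model or agent.
    return not any(term in step.action for term in banned)

def run(goal: str, banned: List[str]) -> List[str]:
    """Plan-act-correct-verify loop: verify each step before acting."""
    log = []
    for step in plan(goal):
        step.approved = critique(step, banned)   # verify
        if not step.approved:
            log.append(f"BLOCKED: {step.action}")  # correct: skip/replan
            continue
        log.append(f"EXECUTED: {step.action}")     # act
    return log

print(run("backup files", banned=["delete"]))
```

The key design point is decoupling: `critique` holds no task-completion incentive, so the safety check cannot be traded off against performance inside the planner itself.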