
Overreliance

As AI capability increases, humans grant AI systems greater control over critical systems and eventually become irreversibly dependent on systems they do not fully understand. At that point, failures and unintended outcomes can no longer be contained or corrected.

Source: MIT AI Risk Repository, mit921

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit921

Domain lineage

5. Human-Computer Interaction

5.2 > Loss of human agency and autonomy (92 mapped risks)

Mitigation strategy

1. Implement Human-In-The-Loop (HITL) Governance: Integrate mandatory human oversight at critical decision points—especially in high-stakes systems—to prevent the irreversible loss of human agency and ensure that decisions remain subject to ethical judgment and contextual awareness.

2. Cultivate Appropriate User Mental Models and System Transparency: Employ user experience (UX) strategies to clearly communicate the AI's capabilities, limitations, and decision-making processes (e.g., through disclaimers or expressions of uncertainty), thereby facilitating calibrated reliance and giving users the information needed to spot and mitigate system errors.

3. Design for Active Verification and Skill Maintenance: Use cognitive forcing functions, such as confirmation dialogues or AI critiques, and provide ongoing training that actively encourages users to verify AI outputs and engage in independent critical thinking, counteracting automation complacency, cognitive offloading, and the erosion of human skills.
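The HITL and verification strategies above can be sketched as a simple decision gate. This is an illustrative sketch, not part of the repository entry: the confidence threshold, field names, and escalation path are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable

# Assumed policy value: below this confidence, a human must review.
CONFIDENCE_THRESHOLD = 0.85


@dataclass
class AIRecommendation:
    action: str
    confidence: float  # model's self-reported confidence in [0, 1]


def requires_human_review(rec: AIRecommendation, high_stakes: bool) -> bool:
    """Cognitive forcing function: high-stakes or low-confidence
    recommendations are routed to a human instead of auto-executing."""
    return high_stakes or rec.confidence < CONFIDENCE_THRESHOLD


def decide(rec: AIRecommendation, high_stakes: bool,
           human_approve: Callable[[AIRecommendation], bool]) -> str:
    """Execute the recommendation only after the mandatory gate passes."""
    if requires_human_review(rec, high_stakes):
        # Mandatory human oversight at the critical decision point.
        return rec.action if human_approve(rec) else "escalate"
    # Low-stakes, high-confidence: automation is allowed to proceed.
    return rec.action
```

In practice, the `human_approve` callback would open a confirmation dialogue that displays the recommendation alongside its uncertainty, so that approval is an active verification step rather than a reflexive click.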