5. Human-Computer Interaction

Over-reliance

Unfettered or obsessive belief in the accuracy or other qualities of a technology system, resulting in complacency, diminished critical thinking, and other actual or potential negative impacts

Source: MIT AI Risk Repository (mit1361)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit1361

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Establish mandatory Human-in-the-Loop (HITL) protocols for all high-stakes or critical decision-making processes, complemented by Human-AI Interaction (HAX) principles that cultivate realistic mental models of the system's capabilities, limitations, and potential failure modes.

2. Integrate cognitive forcing functions (e.g., confirmation dialogues, friction points) and transparency aids (e.g., uncertainty expressions, simplified explanations) into the user interface to signal the necessity of verification and reduce the cognitive load associated with critically reviewing AI outputs.

3. Mandate continuous, scenario-based professional development and training that explicitly addresses automation bias, focuses on maintaining independent critical-thinking skills, and includes exposure to simulated AI failures to promote calibrated trust and preparedness for manual intervention.
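The HITL gating and cognitive forcing functions described above can be sketched in code. The following is a minimal, illustrative Python example, not part of the repository entry: the function name `review_recommendation`, the 0.9 confidence threshold, and the `ESCALATE_TO_HUMAN` sentinel are all assumptions chosen for illustration. It shows how a low-confidence AI output can be routed through an explicit human confirmation step while still surfacing its uncertainty.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AIRecommendation:
    """A model output paired with its self-reported confidence."""
    label: str
    confidence: float  # 0.0-1.0; assumed calibrated upstream

def review_recommendation(
    rec: AIRecommendation,
    confirm: Callable[[str], bool],
    threshold: float = 0.9,  # illustrative cutoff, not a standard value
) -> str:
    """Gate an AI recommendation behind a human confirmation step.

    Outputs below `threshold` force an explicit accept/reject decision,
    a simple cognitive forcing function; the prompt always states the
    model's confidence as a transparency aid.
    """
    prompt = f"AI suggests '{rec.label}' (confidence {rec.confidence:.0%})."
    if rec.confidence >= threshold:
        # High confidence: auto-accept, but the caller can still log `prompt`.
        return rec.label
    # Friction point: require an explicit human decision before acting.
    accepted = confirm(prompt + " Accept?")
    return rec.label if accepted else "ESCALATE_TO_HUMAN"

# Example: a reviewer who rejects any suggestion below the threshold.
decision = review_recommendation(
    AIRecommendation("approve_loan", 0.72),
    confirm=lambda msg: False,
)
# decision == "ESCALATE_TO_HUMAN"
```

In a real system the `confirm` callback would be a UI dialogue rather than a lambda, and escalation would route the case to a qualified human reviewer, consistent with the HITL protocol in point 1.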