5. Human-Computer Interaction / 2 - Post-deployment

Automation bias

The tendency for humans to over-rely on AI models and systems, trusting their outputs without sufficient critical evaluation, which can lead to poor decision-making.

Source: MIT AI Risk Repository (mit1052)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1052

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Implement Human-Centric System Design: Introduce transparency and friction mechanisms directly into the system's user experience (UX) to promote critical evaluation. This involves providing clear confidence scores or uncertainty expressions with AI recommendations, offering sufficient contextual information for verification (e.g., sources/explanations), and deploying cognitive forcing functions to mandate human review before critical decisions are finalized.

2. Mandate Comprehensive AI Literacy and Critical Engagement Training: Establish rigorous, domain-specific training programs for all users. The curriculum must detail the AI system's specific capabilities, inherent limitations, and potential failure modes (e.g., hallucinations or bias). A core focus must be on cultivating user discernment and reinforcing accountability for the critical evaluation of automated outputs rather than passive acceptance.

3. Define Clear Operational Governance and System Boundaries: Establish a formal governance framework that precisely defines the operational limits (the scope and constraints) for AI use within the organization. This framework must clearly delineate decisions that require mandatory human-in-the-loop oversight and assign explicit accountability for errors that result from over-reliance or system misuse.
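The cognitive forcing function described in strategy 1 can be sketched in code. The example below is a minimal illustration, not a prescribed implementation: the `Recommendation` type, the `requires_human_review` gate, and the `confidence_floor` threshold are all hypothetical names chosen for this sketch. The idea is simply that automatic acceptance is blocked when a decision is critical, model confidence is low, or no verifiable sources accompany the output.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI output paired with explicit uncertainty and evidence signals."""
    action: str
    confidence: float   # model-reported confidence in [0, 1]
    sources: list       # contextual evidence offered for human verification

def requires_human_review(rec: Recommendation,
                          is_critical: bool,
                          confidence_floor: float = 0.9) -> bool:
    """Cognitive forcing function: route the output to a human reviewer
    when the decision is critical, confidence falls below the floor,
    or no sources are provided for verification."""
    return is_critical or rec.confidence < confidence_floor or not rec.sources

# A low-confidence recommendation with no sources is routed to a human.
rec = Recommendation(action="approve claim", confidence=0.72, sources=[])
print(requires_human_review(rec, is_critical=False))  # True
```

In practice the gate would be wired into the UX layer so that, when it fires, the interface displays the confidence score and sources and requires an explicit reviewer confirmation before the action proceeds.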