5. Human-Computer Interaction | 2 - Post-deployment

Human choice of overreliance in critical sectors

Heavy reliance on AI in critical sectors such as finance or healthcare can exacerbate risks arising from the size, speed, interconnectivity, and complexity of those systems.

Source: MIT AI Risk Repository (mit1071)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1071

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Implement mandatory structured Human-in-the-Loop (HITL) processes with Cognitive Forcing Functions for critical AI outputs. These mechanisms (e.g., required input verification, confirmation dialogues, or intentional friction) must be designed to ensure active human review rather than passive acceptance, thereby directly counteracting automation bias and maintaining the human's role in clinical or financial decision-making.

2. Establish domain-specific AI literacy and trust-calibration training programs for all human operators. Curricula must be continually updated to address the AI system's specific failure modes, uncertainty expressions, and data biases, ensuring that users can realistically assess the AI's capabilities and appropriately calibrate their reliance to prevent cognitive deskilling.

3. Deploy continuous post-deployment monitoring and systemic adverse-event reporting protocols. This requires tracking key metrics of overreliance (e.g., the acceptance rate of incorrect AI recommendations) and establishing a clear, non-punitive process for reporting AI-related errors and near-misses to identify emerging systemic risks and model performance degradation over time.
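The overreliance metric named in the third mitigation can be made concrete with a small tracker. The sketch below is illustrative only: the `OverrelianceMonitor` class and its field names are hypothetical, and it assumes that each AI recommendation is eventually adjudicated as correct or incorrect (e.g., via chart review or audit), so the monitor can compute the fraction of incorrect recommendations that human reviewers nonetheless accepted.

```python
from dataclasses import dataclass


@dataclass
class OverrelianceMonitor:
    """Hypothetical tracker for one overreliance metric: how often
    humans accept AI recommendations later judged incorrect."""
    accepted_incorrect: int = 0
    total_incorrect: int = 0

    def record(self, ai_was_correct: bool, human_accepted: bool) -> None:
        # Only adjudicated-incorrect recommendations feed the metric.
        if not ai_was_correct:
            self.total_incorrect += 1
            if human_accepted:
                self.accepted_incorrect += 1

    def acceptance_rate_of_incorrect(self) -> float:
        """Fraction of incorrect AI recommendations the human accepted."""
        if self.total_incorrect == 0:
            return 0.0
        return self.accepted_incorrect / self.total_incorrect


# Example audit log: (AI correct?, human accepted?) per reviewed case.
monitor = OverrelianceMonitor()
for correct, accepted in [(True, True), (False, True),
                          (False, False), (False, True)]:
    monitor.record(correct, accepted)
print(monitor.acceptance_rate_of_incorrect())  # 2 of 3 incorrect outputs accepted
```

A rising value of this rate over successive review periods would be one signal, under these assumptions, that reliance is drifting toward passive acceptance and that the HITL friction or training interventions above need strengthening.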