
Overreliance

Causing people to become emotionally or materially dependent on the model

Source: MIT AI Risk Repository (mit277)

ENTITY: 1 - Human
INTENT: 2 - Unintentional
TIMING: 2 - Post-deployment
Risk ID: mit277

Domain lineage: 5. Human-Computer Interaction (92 mapped risks) > 5.1 Overreliance and unsafe use

Mitigation strategy

1. Prioritize Model Calibration and Uncertainty Expression: modify model training processes, such as reward functions, so the system accurately reflects and communicates epistemic uncertainty (e.g., confidence scores or natural-language markers), preventing users from overestimating its reliability and competence (first sketch below).

2. Implement Cognitive Forcing Functions and Verification Cues: design system interfaces to incorporate friction mechanisms (e.g., extra clicks, confirmation dialogues) and priming statements that clarify system limitations, thereby interrupting automation bias and promoting user verification at critical decision points (second sketch below).

3. Establish Human-in-the-Loop Oversight and Continuous Learning Protocols: mandate human oversight for critical AI-driven decisions and institute mandatory professional development to maintain and enhance fundamental cognitive and domain-specific skills, counteracting skill atrophy (third sketch below).
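A minimal sketch of the first strategy, assuming a model that already emits a calibrated confidence score (e.g., from temperature scaling or an ensemble): the score is translated into a natural-language uncertainty marker before the answer reaches the user. The function name and thresholds are illustrative, not part of the repository entry.

```python
def verbalize_uncertainty(answer: str, confidence: float) -> str:
    """Prefix an answer with a natural-language marker of epistemic uncertainty.

    `confidence` is assumed to come from a calibrated source; the
    thresholds below are illustrative and would be tuned per deployment.
    """
    if confidence >= 0.9:
        marker = "High confidence"
    elif confidence >= 0.6:
        marker = "Moderate confidence -- please verify"
    else:
        marker = "Low confidence -- treat as a guess"
    return f"[{marker} ({confidence:.0%})] {answer}"

print(verbalize_uncertainty("The capital of Australia is Canberra.", 0.95))
print(verbalize_uncertainty("The meeting was moved to Tuesday.", 0.42))
```

Pairing the verbal marker with the raw score gives users both a quick cue and a number to weigh, which is the calibration-plus-communication coupling the strategy describes.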
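The second strategy can be sketched as a confirmation gate that interrupts one-click acceptance of an AI suggestion. `accept_with_friction` and the console prompt are hypothetical stand-ins for a real UI dialog.

```python
from typing import Callable, Optional

def accept_with_friction(suggestion: str,
                         confirm: Callable[[str], bool]) -> Optional[str]:
    """Require an explicit verification step before an AI suggestion is applied."""
    prompt = (
        "This suggestion was generated by an AI system and may be wrong.\n"
        f"Suggestion: {suggestion}\n"
        "Have you checked it against a primary source?"
    )
    if confirm(prompt):
        return suggestion  # user verified; apply the suggestion
    return None            # user declined; fall back to manual handling

# A console prompt stands in for a confirmation dialog in a real interface.
applied = accept_with_friction(
    "Set the dosage to 5 mg twice daily.",
    confirm=lambda p: input(p + " [y/N] ").strip().lower() == "y",
)
print("Applied" if applied else "Rejected")
```

The priming statement inside the prompt doubles as the verification cue: it names the system's limitation at exactly the decision point where automation bias would otherwise operate.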
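For the third strategy, a sketch of human-in-the-loop routing: decisions below a confidence threshold are escalated to a human review queue rather than executed automatically. The threshold, queue class, and decision strings are assumptions for illustration.

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.8  # illustrative; set per the criticality of the decision

@dataclass
class ReviewQueue:
    """Pending (decision, confidence) pairs awaiting a human reviewer."""
    pending: list = field(default_factory=list)

    def escalate(self, decision: str, confidence: float) -> None:
        self.pending.append((decision, confidence))

def route_decision(decision: str, confidence: float, queue: ReviewQueue) -> str:
    """Auto-approve confident decisions; escalate the rest to a human."""
    if confidence < REVIEW_THRESHOLD:
        queue.escalate(decision, confidence)
        return "escalated to human reviewer"
    return "auto-approved"

queue = ReviewQueue()
print(route_decision("approve application A", 0.65, queue))
print(route_decision("approve application B", 0.93, queue))
print(f"{len(queue.pending)} decision(s) awaiting human review")
```

Requiring reviewers to work the queue regularly also exercises the domain skills that the strategy's professional-development component aims to preserve.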

ADDITIONAL EVIDENCE

Example: skill atrophy (e.g., decreased critical thinking skills) from excessive model use (Bai et al., 2023b)