5. Human-Computer Interaction · 2 - Post-deployment

Over- or under-reliance

In AI-assisted decision-making tasks, reliance measures how much a person trusts (and acts on) a model’s output. Over-reliance occurs when a person puts too much trust in the model, accepting its output even when that output is likely incorrect. Under-reliance is the opposite: the person rejects the model’s output even when it is likely correct.
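The definitions above can be made concrete as empirical rates over logged decisions. A minimal sketch, assuming a hypothetical log format of `(ai_correct, human_followed_ai)` pairs (field names and structure are illustrative, not part of the repository entry):

```python
# Hedged sketch: estimating over- and under-reliance from decision logs.
# Each record is (ai_correct, human_followed_ai); both are booleans.

def reliance_rates(records):
    """Over-reliance rate: fraction of incorrect AI outputs the human accepted.
    Under-reliance rate: fraction of correct AI outputs the human rejected."""
    wrong = [followed for correct, followed in records if not correct]
    right = [followed for correct, followed in records if correct]
    over = sum(wrong) / len(wrong) if wrong else 0.0
    under = sum(not f for f in right) / len(right) if right else 0.0
    return over, under

# Toy log: two correct AI outputs (one rejected), two incorrect (one accepted).
logs = [(True, True), (True, False), (False, True), (False, False)]
print(reliance_rates(logs))  # → (0.5, 0.5)
```

Well-calibrated reliance would drive both rates toward zero; a high first number signals over-reliance, a high second number under-reliance.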

Source: MIT AI Risk Repository, mit1306

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1306

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

- Integrate **Cognitive Forcing Functions (CFFs)** and **uncertainty expressions** within the user interface to disrupt automatic acceptance of AI outputs, thereby compelling active, analytical verification, particularly in high-stakes decision contexts
- Mandate **Human-in-the-Loop (HITL) protocols** for all critical decision-making processes, supported by rigorous monitoring and evaluation frameworks to ensure consistent human oversight and prompt detection of errors or biases in AI recommendations
- Implement continuous, role-specific **AI literacy and critical-skill development training** for all users to calibrate trust appropriately, mitigate cognitive biases (such as automation bias), and maintain the essential cognitive skills necessary for effective human-AI team performance
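The first mitigation above could be sketched as interface logic: below a confidence threshold, the UI withholds the AI suggestion until the user records their own judgment first, and always attaches an uncertainty expression. This is a minimal illustration; the function name, fields, and the 0.8 threshold are assumptions, not prescriptions from the source:

```python
# Hedged sketch of a cognitive forcing function plus uncertainty expression.
# All names and the threshold value are illustrative assumptions.

def present_suggestion(confidence, threshold=0.8):
    """Return a UI directive for one AI recommendation.

    High confidence: show the suggestion, annotated with its confidence.
    Low confidence: withhold it until the user answers independently,
    forcing analytical engagement before the AI output can anchor them.
    """
    note = f"Model confidence: {confidence:.0%}"
    if confidence >= threshold:
        return {"show_suggestion": True,
                "require_own_answer_first": False,
                "uncertainty_note": note}
    return {"show_suggestion": False,
            "require_own_answer_first": True,
            "uncertainty_note": note + " (low; answer independently first)"}

print(present_suggestion(0.92)["show_suggestion"])          # → True
print(present_suggestion(0.55)["require_own_answer_first"]) # → True
```

Gating the suggestion rather than merely labeling it is what makes this a forcing function: the user cannot passively accept an output they have not yet seen.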