
Overreliance

Over-reliance - Unfettered and/or obsessive belief in the accuracy or other quality of a technology system, resulting in addiction, anxiety, introversion, sentience, complacency, lack of critical thinking and other actual or potential negative impacts.

Source: MIT AI Risk Repository (mit950)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit950

Domain lineage

5. Human-Computer Interaction (92 mapped risks)

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Prioritize realistic user mental models through explicit transparency measures. Integrate clear first-run experiences and persistent UI messaging that articulate the system's capabilities, limitations, and likely failure modes, calibrating user expectations and preventing an unfettered belief in its infallibility.

2. Employ "Signaling When to Verify" strategies to combat automation complacency and lack of critical thinking. Use cognitive forcing functions, such as confirmation dialogues or friction points, and uncertainty expressions (e.g., highlighting low-probability outputs) to prompt users to review automated recommendations before acting on them.

3. Decrease the cognitive load of verification to facilitate appropriate human oversight. Design and integrate verification aids, such as linked sources, provenance data, and accessible explanations, so users can rapidly assess the correctness and completeness of the system's output.
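The second strategy above can be sketched in code. The following is a minimal, hypothetical illustration of an uncertainty-based "verify" signal: the threshold value, class names, and label text are illustrative assumptions, not part of the repository entry or any specific product.

```python
# Hypothetical sketch of "Signaling When to Verify": attach a visible
# uncertainty cue to low-confidence outputs so the UI can prompt the
# user to review them before acting. All names and the threshold are
# illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ModelOutput:
    text: str
    confidence: float  # model-reported probability in [0, 1]


# Assumed cutoff below which the UI asks the user to verify.
VERIFY_THRESHOLD = 0.75


def needs_verification(output: ModelOutput) -> bool:
    """Return True when the output should carry a 'please verify' signal."""
    return output.confidence < VERIFY_THRESHOLD


def render(output: ModelOutput) -> str:
    """Prefix flagged outputs with an uncertainty cue (a simple
    cognitive forcing function); pass confident outputs through."""
    if needs_verification(output):
        return f"[LOW CONFIDENCE - please verify] {output.text}"
    return output.text
```

A real deployment would pair such a signal with the friction points and verification aids described in strategies 2 and 3, rather than relying on a raw confidence score alone.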