5. Human-Computer Interaction / 2 - Post-deployment

Influence, overreliance and dependence (overreliance)

Beyond being simply influenced, humans may become overreliant on generative AI. Researchers with Microsoft’s AETHER (AI Ethics and Effects in Engineering and Research) define overreliance as users “accepting incorrect AI recommendations” or “making errors of commission” because they are “unable to determine whether or how much they should trust the AI.”

Source: MIT AI Risk Repository (mit740)

ENTITY: 1 - Human

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit740

Domain lineage: 5. Human-Computer Interaction (92 mapped risks) > 5.1 Overreliance and unsafe use

Mitigation strategy

1. Prioritize User Experience (UX) for Appropriate Reliance: Implement design strategies focused on fostering appropriate reliance, specifically by creating realistic user mental models. This requires continuous transparency through clear, discoverable UI messaging detailing the generative AI system's capabilities, limitations (e.g., propensity for inaccuracies or "hallucinations"), and fundamental workings to establish a calibrated understanding of its trustworthiness.

2. Integrate Cognitive Forcing Functions and Verification Aids: Design system interfaces to actively mitigate automation complacency by integrating "signal to verify" mechanisms. This includes employing cognitive forcing functions (e.g., mandatory confirmation steps, deliberate friction) and displaying uncertainty expressions in high-stakes outputs. Furthermore, facilitate the verification process by providing easily accessible, robust, and reliable grounding data or source citations to decrease the user's cognitive load when validating AI-generated content.

3. Mandate Comprehensive AI Literacy and Critical Assessment Training: Develop and implement standardized educational programs focused on AI literacy for all user cohorts. This training must emphasize the probabilistic nature of generative AI, the necessity of critical assessment of outputs, and the ethical use of the system to ensure the technology is employed as an augmentative tool rather than a substitute for human expertise and judgment.