5. Human-Computer Interaction · 2 - Post-deployment

Increased vulnerability to misinformation

Advanced AI assistants may make users more susceptible to misinformation, as people develop competence trust in these systems’ abilities and uncritically turn to them as reliable sources of information.

Source: MIT AI Risk Repository (mit433)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit433

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Establish mandatory human oversight and verification protocols for all high-stakes or sensitive AI-generated content, requiring human-in-the-loop processes that cross-verify outputs against established external sources to mitigate automation bias and overreliance.

2. Develop and implement comprehensive AI literacy and training initiatives that educate users on the system's probabilistic nature, inherent limitations, and the importance of independent judgment, fostering appropriate reliance rather than blind competence trust.

3. Integrate technical and user experience (UX) safeguards, including Retrieval-Augmented Generation (RAG) to ground facts in verified data, and transparency mechanisms such as uncertainty expressions and contextual explanations, so users can assess and verify the reliability of the AI's output.
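The transparency and oversight measures above can be illustrated with a minimal sketch. All names here (`AssistantAnswer`, `present_answer`, the 0.8 confidence floor) are hypothetical, chosen only to show how uncertainty expressions, source grounding, and a human-review gate might be surfaced to a user:

```python
from dataclasses import dataclass, field

@dataclass
class AssistantAnswer:
    text: str
    confidence: float            # model-reported confidence in [0, 1] (assumed available)
    sources: list[str] = field(default_factory=list)  # documents retrieved to ground the answer

def present_answer(answer: AssistantAnswer,
                   high_stakes: bool,
                   confidence_floor: float = 0.8) -> str:
    """Render an answer with uncertainty cues and a verification gate."""
    lines = [answer.text]
    # Transparency mechanism: surface an uncertainty expression when confidence is low.
    if answer.confidence < confidence_floor:
        lines.append(f"[Low confidence: {answer.confidence:.0%} - verify independently]")
    # Grounding: show retrieved sources so the user can cross-check the claim.
    if answer.sources:
        lines.append("Sources: " + ", ".join(answer.sources))
    else:
        lines.append("[No supporting sources retrieved - treat as unverified]")
    # Human-in-the-loop gate for high-stakes or sensitive content.
    if high_stakes:
        lines.append("[Requires human review before use]")
    return "\n".join(lines)
```

This only sketches the presentation layer; in practice the confidence score, retrieval step, and review workflow would come from the surrounding system.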