Over-reliance
The apparent convenience and power of ChatGPT could lead users to over-rely on it, trusting its answers uncritically. Unlike traditional search engines, which present multiple information sources for users to judge and select from, ChatGPT generates a single specific answer for each prompt. Although ChatGPT can increase efficiency by saving time and effort, users may fall into the habit of adopting its answers without reasoning through or verifying them. Over-reliance on generative AI technology can impede skills such as creativity, critical thinking, and problem-solving (Iskender, 2023), and can create human automation bias through habitual acceptance of generative AI recommendations (Van Dis et al., 2023).
ENTITY
3 - Other
INTENT
2 - Unintentional
TIMING
3 - Other
Risk ID
mit536
Domain lineage
5. Human-Computer Interaction
5.2 > Loss of human agency and autonomy
Mitigation strategy
1. Mandate UI Transparency and Facilitate Verification
Implement user interface (UI) mechanisms and *cognitive forcing functions* to cultivate *realistic mental models* of the AI system's capabilities and limitations. This includes explicitly labeling AI-generated content and integrating easily discoverable *verification aids*, such as source lineage or *uncertainty expressions*, to *signal to users when to verify* outputs and reduce the *cognitive load* associated with critical review, thereby fostering appropriate reliance (Sources 8, 9).

2. Establish Formal Human-in-the-Loop Governance
Institute mandatory *human-in-the-loop* (HITL) processes with defined *checkpoints* that require a subject matter expert (SME) or editor to critically review and validate AI-generated content before it is published or deployed. This structured *human oversight* ensures errors and potential biases are systematically caught, effectively mitigating the organizational risk associated with *automation bias* and the uncritical acceptance of system outputs (Sources 3, 9, 12).

3. Implement Comprehensive AI Literacy and Critical Thinking Training
Develop and deploy compulsory educational programs to build *AI literacy* across all user groups. This training must explicitly cover the probabilistic and non-deterministic nature of generative AI, the mechanisms by which it can *hallucinate*, and the documented cognitive risks of *over-reliance*, such as the impairment of *critical thinking* and *problem-solving* skills. The curriculum should focus on promoting user accountability and empowering personnel to exercise independent judgment (Sources 3, 5, 6, 13).