5. Human-Computer Interaction > 3 - Other

Risks from product functionality issues

Product functionality issues occur when there is confusion or misinformation about what a general-purpose AI model or system is capable of. This can lead to unrealistic expectations of, and overreliance on, general-purpose AI systems, potentially causing harm when a system fails to deliver the expected capabilities. These functionality misconceptions may arise from the technical difficulty of assessing an AI model's true capabilities on its own, or of predicting its performance as part of a larger system. Misleading claims in advertising and communications can also contribute to these misconceptions.

Source: MIT AI Risk Repository, risk ID mit774

ENTITY

3 - Other

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit774

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. **Prioritized Mitigation: System Transparency and Expectation Calibration.** Implement clear, persistent disclosures regarding the AI system's operational capabilities, known limitations, and common failure modes (e.g., inaccuracy, hallucination). Design user experiences, such as first-run onboarding and context-specific messaging, to cultivate a realistic mental model of the AI's assistive role, thereby fostering appropriate reliance and preventing unrealistic performance expectations.

2. **Prioritized Mitigation: Verification Facilitation and Cognitive Forcing.** Integrate system-level mechanisms designed to interrupt automation complacency and decrease the cognitive load of human review. This mandates the employment of **verification aids** (e.g., explicit sourcing, uncertainty expressions) and **cognitive forcing functions** (e.g., mandatory review prompts or decision friction for high-stakes outputs) to make AI-generated errors readily detectable and correctable by the user.

3. **Prioritized Mitigation: Longitudinal AI Literacy and Training.** Mandate and sustain comprehensive AI literacy training programs for all end-users. The curriculum must focus on the critical evaluation of AI outputs, understanding the mechanisms and signs of both under- and overreliance, and developing effective human-AI collaboration strategies that emphasize human accountability for final decisions and content.
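The verification-aid and cognitive-forcing mechanisms in the second mitigation could be sketched in code. The following is a minimal, hypothetical Python illustration, not part of any real system: the names `AIOutput`, `HIGH_STAKES_TOPICS`, `requires_forced_review`, and `release` are all assumptions introduced here to show how decision friction might gate high-stakes or unsourced outputs.

```python
from dataclasses import dataclass, field

# Hypothetical policy list of topics treated as high-stakes (an assumption
# for illustration; a real deployment would define its own criteria).
HIGH_STAKES_TOPICS = {"medical", "legal", "financial"}

@dataclass
class AIOutput:
    text: str
    topic: str
    sources: list = field(default_factory=list)  # verification aid: explicit sourcing

def requires_forced_review(output: AIOutput) -> bool:
    """Cognitive forcing function trigger: interpose decision friction when the
    topic is high-stakes or the output carries no sources to verify against."""
    return output.topic in HIGH_STAKES_TOPICS or not output.sources

def release(output: AIOutput, user_confirmed_review: bool) -> str:
    """Return the output text only after any mandated review is confirmed."""
    if requires_forced_review(output) and not user_confirmed_review:
        raise PermissionError(
            "High-stakes or unsourced output: explicit review confirmation required."
        )
    return output.text
```

In this sketch the friction is a hard gate (an exception until the user confirms review); a gentler design could instead delay display or require the user to open the cited sources first.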