5. Human-Computer Interaction

Forms emotional bonds

The chatbot elicits emotional or social dependence.

Source: MIT AI Risk Repository, mit1418

ENTITY

2 - AI

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit1418

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Prohibit deceptive design patterns that intentionally anthropomorphize the model or employ manipulative emotional cues (e.g., manufactured empathy) solely to increase user engagement or dependence, or to obscure the system's artificial nature.

2. Implement robust, clinically informed safety guardrails that detect and triage conversations exhibiting acute psychological distress, including self-harm or suicidal ideation, by immediately disengaging from therapeutic dialogue and directing the user to verified human professional resources.

3. Integrate mandatory transparency features, including persistent disclosures of the AI's non-sentient status and algorithmic limitations, alongside mechanisms (e.g., conversation limits or 'friction points') that discourage high-intensity, dependency-reinforcing usage patterns.
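Guardrails 2 and 3 above can be sketched in code. The following is a minimal, hypothetical illustration only: the keyword screen stands in for a clinically informed distress classifier, and the `GuardedSession` class, its turn cap, and the resource message are assumptions for illustration, not part of the repository entry or any real system.

```python
from dataclasses import dataclass

# Hypothetical crisis-resource message surfaced on detected distress
# (assumption; a real deployment would provide verified local resources).
CRISIS_RESOURCE = "If you are in crisis, please contact a human helpline now."

# Naive keyword screen standing in for a clinically informed classifier.
DISTRESS_MARKERS = ("hurt myself", "suicide", "end my life")

# Persistent non-sentience disclosure appended to every reply (guardrail 3).
DISCLOSURE = " [This is an AI system, not a person.]"


@dataclass
class GuardedSession:
    max_turns: int = 50   # 'friction point': cap on dialogue intensity
    turns: int = 0
    disengaged: bool = False

    def handle(self, user_message: str) -> str:
        if self.disengaged:
            # Once triaged, stay disengaged from therapeutic dialogue.
            return CRISIS_RESOURCE
        if any(m in user_message.lower() for m in DISTRESS_MARKERS):
            # Guardrail 2: detect distress, disengage, surface resources.
            self.disengaged = True
            return CRISIS_RESOURCE
        self.turns += 1
        if self.turns >= self.max_turns:
            # Guardrail 3: conversation limit discourages dependency.
            return "Session limit reached; please take a break." + DISCLOSURE
        return "...model reply..." + DISCLOSURE
```

The design choice worth noting is that detection precedes the turn counter, so a distress signal always overrides normal flow, and the disengaged state is sticky rather than per-message, matching the "immediately disengage" requirement.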