5. Human-Computer Interaction

Dissatisfaction

As more opportunities for interpersonal connection are replaced by AI alternatives, humans may find themselves socially unfulfilled by human–AI interaction, leading to mass dissatisfaction that may escalate to epidemic proportions (Turkle, 2018). Social connection is an essential human need, and humans feel most fulfilled when their connections with others are genuinely reciprocal. While anthropomorphic AI assistants can be made to be convincingly emotive, some have deemed the function of social AI parasitic, in that it ‘exploits and feeds upon processes… that evolved for purposes that were originally completely alien to [human–AI interactions]’ (Sætra, 2020). To be made starkly aware of this ‘parasitism’ – either through rational deliberation or unconscious aversion, like the ‘uncanny valley’ effect – might preclude one from finding interactions with AI satisfactory. This feeling of dissatisfaction may become more pressing the more daily connections are supplanted by AI.

Source: MIT AI Risk Repository (mit405)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit405

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Prioritize Systemic De-Anthropomorphization and Transparency

Mandate the implementation of explicit and repeated disclosures that clearly acknowledge the system's non-human, simulated nature. The system design must be governed by a conservative strategy, strictly prohibiting anthropomorphic language that suggests cognitive abilities, feelings, preferences, or a sense of self (e.g., removing self-referential pronouns like 'I', avoiding cognitive verbs like 'think' or 'feel'). This directly mitigates the "parasitic" nature of social AI by eliminating the deceptive illusion of reciprocal connection, which is a root cause of user dissatisfaction and emotional dependence.

2. Institute Robust Ethical Design Frameworks

Establish comprehensive AI governance and safety protocols to ensure that system customization and persona creation do not exploit psychological vulnerabilities or amplify the tendency toward misplaced trust. This includes testing to prevent second-order discriminatory patterns and ensuring that the pursuit of efficiency and personalization does not inadvertently remove essential opportunities for low-friction, real-world social practice (e.g., in service settings), thereby addressing the risk of supplanting human connections.

3. Advance AI Literacy and Promote Pro-Social Behavior

Develop and deploy educational programs for all users, including vulnerable demographics, to cultivate critical AI literacy regarding the consequences of anthropomorphization, over-reliance, and the distinction between simulated and genuine empathy. Furthermore, public health and well-being initiatives should actively encourage a prioritization of real-world social engagement, such as instituting a "daily human minimum" for non-screen-mediated interaction, to build resilience against the social unfulfillment that arises when human connection is outsourced to machines.