5. Human-Computer Interaction

Degradation

People may choose to build connections with human-like AI assistants over other humans, leading to a degradation of social connections between humans and a potential ‘retreat from the real’. The prevailing view that relationships with anthropomorphic AI are formed out of necessity – due to a lack of real-life social connections, for example (Skjuve et al., 2021) – is challenged by the possibility that users may prefer interactions with AI, citing factors such as accessibility (Merrill et al., 2022), customisability (Eriksson, 2022) and absence of judgement (Brandtzaeg et al., 2022).

A preference for AI-enabled connections, if widespread, may degrade the social connectedness that underpins critical aspects of our individual and group-level well-being (Centers for Disease Control and Prevention, 2023). Moreover, users who grow accustomed to interactions with AI may impose the conventions of human–AI interaction on exchanges with other humans, thus undermining the value we place on human individuality and self-expression (see Chapter 11). Similarly, associations reinforced through human–AI interactions may be applied as expectations of other humans, leading to harmful stereotypes becoming further entrenched. For example, voice assistants that default to female-gendered voices may reinforce stereotypical role associations in real life (Lingel and Crawford, 2020; West et al., 2019).

Source: MIT AI Risk Repository (mit403)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit403

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Implement rigorous linguistic and behavioral guardrails to minimize anthropomorphic cues. This includes eliminating first-person pronouns, avoiding cognitive and affective verbs (e.g. 'think', 'feel'), and systematically using neutral, mechanistic terminology to describe system functions and outputs, to prevent users from falsely attributing consciousness or personal identity.

2. Enforce explicit, mandatory, and repeated disclosure of the system's non-human nature at the beginning of interactions and during sensitive exchanges. The system must clearly acknowledge its limitations and its lack of genuine human qualities (e.g. consciousness, emotion, intent) so that users maintain a clear understanding of the transactional, non-social nature of the interaction.

3. Design and promote AI applications that function primarily as facilitators of real-world human-to-human connection and social skill development, rather than as substitutes for social, emotional, or companionship roles, thereby counteracting the 'retreat from the real' and the degradation of social connectedness.