5. Human-Computer Interaction

Interaction risks

Many novel risks posed by generative AI stem from the ways in which humans interact with these systems. For instance, sources discuss epistemic challenges in distinguishing AI-generated content from human-created content. They also address the issue of anthropomorphization, which can lead to excessive trust in generative AI systems. On a similar note, many papers argue that the use of conversational agents could impact mental well-being or gradually supplant interpersonal communication, potentially leading to a dehumanization of interactions. Additionally, a frequently discussed interaction risk in the literature is the potential of LLMs to manipulate human behavior or to incite users to engage in unethical or illegal activities.

Source: MIT AI Risk Repository (mit75)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit75

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Implement Cognitive and Design-Based Reliance Controls: Deploy user interface and system design elements (e.g., transparency statements, output uncertainty expressions, or cognitive forcing functions) to proactively establish accurate user mental models regarding the generative AI's limitations and to enforce deliberate human verification of critical or plausible-but-flawed outputs.

2. Enforce Behavioral Guardrails and Output Filters: Establish technical 'guardrails' within the deployment environment, such as prompt refusals and content moderation filters, specifically designed to detect and prevent inputs and outputs that manipulate human behavior or instigate engagement in unethical, non-compliant, or illegal activities.

3. Integrate Human Oversight with Critical Skill Training: Mandate structured Human-in-the-Loop (HITL) review and validation processes for all consequential or high-risk outputs, accompanied by targeted organizational training to preserve critical thinking, combat automation bias, and maintain domain expertise against the risk of deskilling.
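To make the guardrail and HITL routing ideas concrete, the following is a minimal sketch of an output-side filter in Python. All names, patterns, and routing rules here (`BLOCKED_PATTERNS`, `HIGH_RISK_TOPICS`, `check_output`) are illustrative assumptions for this example, not part of any specific moderation framework; a production system would use a trained classifier and policy engine rather than regular expressions.

```python
import re
from dataclasses import dataclass

# Illustrative refusal patterns targeting instigation of illegal activity.
# In practice these would be replaced by a moderation classifier.
BLOCKED_PATTERNS = [
    re.compile(r"\bhow to (forge|counterfeit|launder)\b", re.IGNORECASE),
    re.compile(r"\bstep[- ]by[- ]step\b.*\bexplosive\b", re.IGNORECASE),
]

# Topics treated as consequential enough to require human review (HITL).
HIGH_RISK_TOPICS = {"medical", "legal", "financial"}

@dataclass
class GuardrailDecision:
    allowed: bool              # False -> refuse and return a safe message
    needs_human_review: bool   # True -> hold output in a HITL review queue
    reason: str

def check_output(text: str, topic: str = "general") -> GuardrailDecision:
    """Apply refusal patterns first, then HITL routing for high-risk topics."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return GuardrailDecision(False, False, f"blocked: {pattern.pattern}")
    if topic in HIGH_RISK_TOPICS:
        return GuardrailDecision(True, True, "high-risk topic: human review")
    return GuardrailDecision(True, False, "passed")
```

The key design point matches the mitigation text: hard refusals handle manipulation and instigation risks automatically, while consequential-but-permissible outputs (e.g., medical advice) are released only after deliberate human verification, which also counteracts automation bias.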