5. Human-Computer Interaction

Human-AI Configuration

Arrangements of, or interactions between, a human and an AI system that can result in the human inappropriately anthropomorphizing GAI systems or experiencing algorithmic aversion, automation bias, over-reliance, or emotional entanglement with GAI systems.

Source: MIT AI Risk Repository (mit762)

ENTITY

3 - Other

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit762

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Establish clear Human-AI Configurations with defined **Task Allocation and Oversight Protocols**: require human judgment for high-stakes decisions and give operators the ability to override or disengage the AI system, which directly mitigates automation bias and over-reliance.
2. Implement **Explainable AI (XAI) and Transparency** features, such as confidence levels, source attribution, or simple explanations of the system's capabilities and limitations (to support realistic mental models), alongside **Cognitive Forcing Functions** (e.g., confirmation dialogues) that prompt and facilitate user verification of AI-generated content.
3. Apply **De-anthropomorphization Design Principles** to system outputs and interaction patterns: explicitly disclose the system's non-human nature and rigorously avoid anthropomorphic linguistic cues such as first-person pronouns, cognitive verbs, or expressions of emotion, to mitigate inappropriate anthropomorphization and emotional entanglement.
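The second and third mitigations can be sketched in code. The following is a minimal, illustrative Python sketch (not part of the repository entry): a cognitive forcing function that requires explicit human confirmation before a high-stakes AI suggestion is applied, surfacing the model's confidence, plus a simple check that flags anthropomorphic cues (first-person pronouns, cognitive verbs, emotion words) in model output. The cue list, prompt wording, and `confirm` callback are assumptions chosen for illustration.

```python
import re
from typing import Callable

# Illustrative (non-exhaustive) list of anthropomorphic linguistic cues:
# first-person pronouns, cognitive verbs, and expressions of emotion.
ANTHROPOMORPHIC_CUES = re.compile(
    r"\b(I|me|my|mine|feel|believe|think|want|hope)\b",
    re.IGNORECASE,
)

def flag_anthropomorphic_cues(text: str) -> list[str]:
    """Return anthropomorphic cue words found in model output,
    so they can be rewritten or flagged before display."""
    return ANTHROPOMORPHIC_CUES.findall(text)

def gate_high_stakes_action(suggestion: str, confidence: float,
                            confirm: Callable[[str], str]) -> bool:
    """Cognitive forcing function: require explicit human confirmation
    before an AI suggestion is applied, and surface the model's
    confidence so the operator can calibrate trust."""
    prompt = (f"AI suggestion (confidence {confidence:.0%}): "
              f"{suggestion!r}. Apply? [y/N] ")
    # Default to rejecting the suggestion unless the human opts in.
    return confirm(prompt).strip().lower() == "y"
```

For example, `flag_anthropomorphic_cues("I believe this is safe")` would surface `"I"` and `"believe"` for rewriting, and `gate_high_stakes_action` with a `confirm` callback wired to the operator UI returns `False` unless the operator explicitly types "y", keeping rejection the default path.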