5. Human-Computer Interaction

Impact on human agency

AI might affect individuals' ability to make choices and act independently in their best interests.

Source: MIT AI Risk Repository (mit1335)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1335

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.2 > Loss of human agency and autonomy

Mitigation strategy

1. Implement a Human-in-the-Loop (HITL) architectural pattern that prioritizes augmentation over automation, ensuring human users retain final decision-making authority and robust control mechanisms (e.g., override and customization) over all critical AI recommendations and actions.

2. Mandate the use of Explainable AI (XAI) and transparent recommendation systems to provide clear, human-understandable reasoning for all AI outputs, coupled with a Reversibility Index to ensure easy, auditable undo and graceful exit capabilities.

3. Develop and enforce clear, hard-coded governance structures and escalation protocols that compel autonomous AI systems to halt and defer to human experts when encountering pre-defined thresholds of uncertainty, risk, or ambiguity, thus preventing the emergence of unwelcome system autonomy.
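The escalation protocol described above can be sketched in code. The following is a minimal illustrative example, not an implementation from the repository: all names (`Recommendation`, `decide`, the confidence threshold of 0.9) are hypothetical, and a real HITL deployment would involve calibrated uncertainty estimates, audit logging, and a reviewer workflow.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of mitigation 3: the AI acts autonomously only when
# its confidence clears a pre-defined threshold; otherwise the decision is
# escalated to a human reviewer, who retains final authority (mitigation 1).

@dataclass
class Recommendation:
    action: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    rationale: str     # human-readable explanation of the output (mitigation 2)

def decide(rec: Recommendation,
           human_review: Callable[[Recommendation], str],
           threshold: float = 0.9) -> str:
    """Return the chosen action, deferring to the human below the threshold."""
    if rec.confidence >= threshold:
        return rec.action          # autonomous path: confidence is high enough
    return human_review(rec)       # escalation path: halt and defer to a human

# Usage: a low-confidence recommendation is escalated, not auto-applied.
rec = Recommendation("approve_loan", confidence=0.62,
                     rationale="Income verified, but credit history is thin.")
chosen = decide(rec, human_review=lambda r: "manual_review")
```

Here the threshold is hard-coded into the control flow rather than left to the model, matching the strategy's call for governance rules the system cannot override.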