5. Human-Computer Interaction

Reduced Autonomy/Responsibility

AI now provides solutions for increasingly complex activities, enabling people to perform more tasks faster and more accurately. However, this innovation also means that choices once made exclusively by humans are increasingly delegated to AI systems.

Source: MIT AI Risk Repository (mit131)

ENTITY

3 - Other

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit131

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.2 > Loss of human agency and autonomy

Mitigation strategy

1. Implement robust Human-in-the-Loop (HIL) systems to ensure that final decision-making authority and ultimate accountability rest with human operators, particularly in safety-critical and ethically sensitive domains. This requires designing clear mechanisms for human oversight, validation, and override of AI outputs to maintain control.

2. Integrate comprehensive Transparency and Explainability (XAI) features within AI systems to provide users with clear, understandable insights into the model's logic, assumptions, and recommendations. This is essential for fostering informed human judgment, building trust, and preventing excessive over-reliance on opaque algorithmic processes.

3. Employ a User-Centered and Non-Coercive Design paradigm to ensure AI functions as a tool to *augment* human capabilities rather than substitute human judgment. This includes implementing customization options and design strategies that actively strengthen human agency and critical thinking, such as incorporating feedback loops or Socratic method-based interaction models to counter cognitive deskilling.
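The override mechanism in strategy 1 can be sketched as a decision gate that escalates low-confidence or sensitive outputs to a human reviewer. This is a hypothetical illustration, not part of the repository entry; the names (`HILGate`, `Decision`) and the confidence threshold are assumptions:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    action: str          # the AI-recommended action
    confidence: float    # model confidence in [0, 1]
    explanation: str     # human-readable rationale (the XAI output)

class HILGate:
    """Routes low-confidence or sensitive decisions to a human reviewer,
    so final authority rests with the operator (hypothetical sketch)."""

    def __init__(self, review: Callable[[Decision], Optional[str]],
                 threshold: float = 0.9):
        self.review = review        # human callback: returns an override, or None to accept
        self.threshold = threshold  # below this confidence, always escalate

    def decide(self, d: Decision, sensitive: bool = False) -> str:
        # Escalate when the model is unsure or the domain is safety-critical.
        if sensitive or d.confidence < self.threshold:
            override = self.review(d)  # reviewer sees action + explanation
            return override if override is not None else d.action
        return d.action

# Usage: a reviewer who vetoes any irreversible action.
gate = HILGate(review=lambda d: "defer" if "delete" in d.action else None)
print(gate.decide(Decision("delete records", 0.95, "matched policy X"),
                  sensitive=True))                                # "defer"
print(gate.decide(Decision("flag for audit", 0.97, "anomaly score high")))
```

The key design choice is that the gate never acts autonomously in the escalation path: the human either supplies an override or explicitly accepts the AI's recommendation, preserving accountability.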