5. Human-Computer Interaction

Loss of Autonomy

Delegating decisions to an AI, especially one that is neither transparent nor contestable, may leave people feeling helpless and subject to the decision-making power of a machine.

Source: MIT AI Risk Repository

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit96

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.2 > Loss of human agency and autonomy

Mitigation strategy

1. Establish a comprehensive contestability framework to ensure affected stakeholders can actively challenge, scrutinize, and influence the outcomes of automated decisions, thereby enhancing legitimacy and providing a corrective mechanism for opaque or biased systems.

2. Mandate and define appropriate levels of human oversight, such as Human-in-the-Loop (HITL) or Human-on-the-Loop (HOTL) models, to mediate critical decisions, preserve human agency, and enable effective intervention based on the system's risk profile.

3. Implement organizational and technical measures to guarantee transparency and responsible disclosure, ensuring users are explicitly notified when an AI materially influences a significant decision and provided with accessible, adequate justifications to facilitate informed review.
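The oversight and contestability measures above can be sketched in code. The following is a minimal, hypothetical illustration (the names `HITLMediator`, `risk_threshold`, and the `Decision` fields are assumptions, not part of any repository standard): high-risk decisions are gated behind a human reviewer (HITL), every decision carries a disclosed justification and an audit log (transparency), and any affected person can contest an outcome, forcing human re-examination (contestability).

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    AUTO_APPROVED = "auto-approved"
    PENDING_REVIEW = "pending human review"
    CONTESTED = "contested"
    UPHELD = "upheld by reviewer"
    OVERTURNED = "overturned by reviewer"


@dataclass
class Decision:
    subject: str        # the affected person
    outcome: str        # what the automated system decided
    rationale: str      # justification disclosed to the subject (transparency)
    risk_score: float   # hypothetical model-estimated impact in [0, 1]
    status: Status = Status.AUTO_APPROVED
    audit_log: list = field(default_factory=list)


class HITLMediator:
    """Routes high-impact automated decisions to a human reviewer and
    lets any affected person contest an outcome (corrective mechanism)."""

    def __init__(self, risk_threshold: float = 0.7):
        self.risk_threshold = risk_threshold
        self.review_queue: list = []

    def submit(self, decision: Decision) -> Decision:
        # Responsible disclosure: record the AI's role and its justification.
        decision.audit_log.append(
            f"AI proposed '{decision.outcome}' for {decision.subject}: "
            f"{decision.rationale}"
        )
        if decision.risk_score >= self.risk_threshold:
            # HITL gate: high-risk decisions are held for human sign-off.
            decision.status = Status.PENDING_REVIEW
            self.review_queue.append(decision)
        return decision

    def contest(self, decision: Decision, reason: str) -> None:
        # Contestability: the subject can always force human re-examination.
        decision.status = Status.CONTESTED
        decision.audit_log.append(f"Contested by {decision.subject}: {reason}")
        if decision not in self.review_queue:
            self.review_queue.append(decision)

    def human_review(self, decision: Decision, uphold: bool, reviewer: str) -> None:
        # Human agency: a named human makes the final call on mediated decisions.
        self.review_queue.remove(decision)
        decision.status = Status.UPHELD if uphold else Status.OVERTURNED
        decision.audit_log.append(
            f"Reviewer {reviewer}: {'upheld' if uphold else 'overturned'}"
        )
```

The `risk_threshold` stands in for whatever risk-profiling the organization mandates; in practice the gate would be driven by the system's documented risk classification rather than a single score.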