5. Human-Computer Interaction (3 - Other)

Passive loss of control

...where humans gradually stop exercising meaningful oversight due to automation bias, the AI systems' inherent complexity, or competitive pressures

Source: MIT AI Risk Repository (mit1450)

ENTITY

3 - Other

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit1450

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.2 > Loss of human agency and autonomy

Mitigation strategy

1. Establish rigorous and continuous training for oversight personnel focused on mitigating automation bias, cultivating critical AI literacy, and mandating explicit justification of human decisions, especially when accepting or overriding high-stakes AI recommendations, to ensure meaningful human judgment is exercised.
2. Develop and integrate advanced human-machine interfaces that provide real-time explainability (XAI) and clear audit trails of the AI's reasoning, alongside easily accessible technical capabilities for human intervention and override, to maintain operator agency against inherent system complexity.
3. Implement a formal accountability framework that designates clear human ownership and responsibility for AI-driven outcomes, requiring meticulous documentation of all oversight activities and interventions to prevent the passive diffusion of control and ensure legal and ethical compliance.