5. Human-Computer Interaction

Gradual loss of control

Gradual or cumulative loss-of-control risks result from the accumulation of less severe disruptions that gradually weaken systemic resilience until a critical event triggers a catastrophe [12], [127].

Source: MIT AI Risk Repository (mit1392)

ENTITY

3 - Other

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit1392

Domain lineage

5. Human-Computer Interaction (92 mapped risks) > 5.2 Loss of human agency and autonomy

Mitigation strategy

1. **Prioritize transparency and agency preservation.** Implement rigorous standards for AI system **transparency** and **interpretability** so that human operators maintain a clear understanding of the system's logic and decision-making processes. Mandate the design of **human-centric feedback loops** that clearly link operator actions to predictable outcomes, preserving human **agency** and preventing the psychological toll associated with "black box" system mediation.

2. **Enhance systemic resilience.** Develop and institutionalize **systemic resilience strategies** across AI-integrated sociotechnical systems. These strategies must incorporate continuous **anticipation** and **monitoring** functions, coupled with a robust **adaptive capacity** to detect and absorb minor, accumulating disruptions and reorganize the system, preventing the gradual erosion of systemic integrity before a critical event occurs.

3. **Mandate human competence and governance.** Establish mandatory, scenario-based **human-computer interaction (HCI) training** focused on correcting over-reliance, building intuitive understanding of system limitations, and fostering vigilance. Complement this with clear **governance policies** and **emergency response plans** that define human roles, responsibilities, and intervention protocols for managing unexpected system deviations.