5. Human-Computer Interaction / 3 - Other

Autonomy/agency loss

Autonomy/agency loss - Loss of an individual's, group's, or organisation's ability to make informed decisions or pursue goals.

Source: MIT AI Risk Repository (mit942)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit942

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.2 > Loss of human agency and autonomy

Mitigation strategy

1. Establish and mandate Human-in-the-Loop (HITL) and Human-in-Command (HIC) protocols for all critical decision-making processes involving autonomous systems. This requires defining specific risk-based thresholds above which human review, intervention, and final override authority are essential to mitigate the loss of moral and practical human agency.

2. Implement and enforce comprehensive AI governance frameworks that define and restrict the scope of autonomous operations. This includes applying a strict, minimal-privilege (zero-trust) approach to AI agent permissions, using whitelists of approved functions and tools, and conducting regular audits to monitor and preemptively control behavioral drift or excessive agency.

3. Prioritize the design of systems with high transparency and interpretability to counteract the erosion of autonomy caused by "black box" algorithms. Mitigation involves developing methods for clear Agent-to-Human (A2H) communication that explain the rationale and data sources behind AI decisions, enabling human operators to maintain contextual awareness and make informed choices.
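The whitelist and risk-threshold mechanisms described in points 1 and 2 can be sketched as a simple authorization gate. This is a minimal illustration, not an implementation from any specific framework; the names (`AgentPolicy`, `authorize`, `review_threshold`) and the scalar risk score are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    """Zero-trust policy: any tool not on the whitelist is denied,
    and whitelisted actions above a risk threshold require human review."""
    allowed_tools: frozenset[str]      # whitelist of approved functions/tools
    review_threshold: float = 0.5      # risk score at or above which a human must approve

    def authorize(self, tool: str, risk_score: float) -> str:
        """Return 'allow', 'review', or 'deny' for a proposed agent action."""
        if tool not in self.allowed_tools:
            return "deny"              # deny-by-default: unlisted tools never run
        if risk_score >= self.review_threshold:
            return "review"            # escalate to a human-in-the-loop for final say
        return "allow"

policy = AgentPolicy(allowed_tools=frozenset({"search_docs", "summarize"}))
print(policy.authorize("summarize", risk_score=0.2))        # allow
print(policy.authorize("summarize", risk_score=0.9))        # review
print(policy.authorize("delete_records", risk_score=0.1))   # deny
```

The key design choice is that the gate answers "review" rather than "allow" whenever risk crosses the threshold, preserving final override authority for the human operator as point 1 requires.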