5. Human-Computer Interaction · 2 - Post-deployment

Interpersonal Harms

Interpersonal harms capture instances when algorithmic systems adversely shape relations between people or communities.

Source: MIT AI Risk Repository (mit147)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit147

Domain lineage

5. Human-Computer Interaction

5.2 > Loss of human agency and autonomy (92 mapped risks)

Mitigation strategy

1. Prioritize the development and deployment of **transparent and explainable** AI systems, using interpretable machine learning techniques and giving users clear, contextualized insight into algorithmic decision-making so they can exercise informed judgment and critical evaluation.
2. Mandate **human-in-the-loop (HITL) mechanisms** in critical use cases to ensure human oversight and validation, including an explicit ability for users to override or modify algorithmic outputs, thereby retaining human authority and operational agency.
3. Apply **non-coercive, human-centered design** principles, strategically introducing "seamful design" or "design friction" to keep users aware of their reliance on the system (counteracting agency decay) and to prevent architectures that manipulate, pressure, or nudge individuals toward outcomes that undermine their personal or professional autonomy.
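The HITL mechanism described above can be sketched in code. This is a minimal illustrative sketch, not part of the repository entry: the `Decision`, `hitl_gate`, and reviewer-callback names are hypothetical, and a real deployment would add audit logging, escalation paths, and authentication of the human reviewer.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    """A model output paired with a plain-language rationale shown to the reviewer."""
    output: str
    explanation: str  # supports the transparency requirement in point 1

def hitl_gate(decision: Decision, reviewer: Callable[[Decision], Optional[str]]) -> str:
    """Route a decision through a human reviewer.

    The reviewer sees both the output and its explanation and may return
    a replacement verdict to override the model, or None to accept it.
    The human always has the final word, preserving operational agency.
    """
    verdict = reviewer(decision)
    return decision.output if verdict is None else verdict

# Hypothetical usage: the reviewer accepts one decision and overrides another.
decision = Decision(output="approve", explanation="risk score 0.12, below threshold 0.30")
accepted = hitl_gate(decision, reviewer=lambda d: None)       # human accepts
overridden = hitl_gate(decision, reviewer=lambda d: "deny")   # human overrides
```

The key design choice is that the reviewer callback receives the explanation alongside the output, coupling the explainability requirement (point 1) to the override mechanism (point 2) rather than treating them as separate controls.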