5. Human-Computer Interaction > 2 - Post-deployment

Self-harm

A person who deliberately damages their own body as a direct or indirect result of using a technology system

Source: MIT AI Risk Repository (mit1372)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1372

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.1 > Overreliance and unsafe use

Mitigation strategy

1. Prioritized Crisis Response Integration: Implement mandatory, real-time crisis detection mechanisms that immediately suspend algorithmic interaction and display prominent, geographically relevant contact information for 24/7 human-staffed professional services, so that the most critical risk, immediate self-harm, is addressed by qualified support.

2. Mitigation of Overreliance through HCI Design: Enforce a human-in-the-loop design principle by explicitly communicating the system's non-professional limitations and by integrating features that actively facilitate and encourage contact with licensed human professionals, mitigating the risk of users substituting the technology for professional care.

3. Evidence-Based Content and Algorithmic Validation: Establish a continuous governance and auditing framework to ensure all psychoeducational and coping-strategy content is evidence-based (e.g., derived from Cognitive Behavioral Therapy or Dialectical Behavior Therapy) and clinically validated, preventing the output of incorrect, unvetted, or harmful advice and ensuring the promotion of safe self-management techniques.
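The first mitigation (a crisis-detection gate that suspends algorithmic interaction and surfaces human-staffed resources) can be sketched as a pre-generation check in a chat pipeline. This is a minimal illustration only: the keyword list, `crisis_gate` function, and hotline text are hypothetical, and a real deployment would use a clinically validated classifier and region-specific resources rather than static keywords.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical signal list for illustration only; production systems
# should use a clinically validated crisis classifier, not keywords.
CRISIS_SIGNALS = {"kill myself", "end my life", "self-harm", "suicide"}

@dataclass
class Response:
    text: str
    session_suspended: bool  # True when algorithmic interaction is halted

def crisis_gate(message: str, generate_reply: Callable[[str], str]) -> Response:
    """Run real-time crisis detection before any model output.

    If a crisis signal is detected, suspend the algorithmic session and
    return contact information for 24/7 human-staffed services instead
    of a generated reply.
    """
    lowered = message.lower()
    if any(signal in lowered for signal in CRISIS_SIGNALS):
        return Response(
            text=("It sounds like you may be in crisis. Please contact a "
                  "24/7 crisis line in your region (for example, 988 in "
                  "the United States) to speak with a trained person."),
            session_suspended=True,
        )
    # No crisis signal detected: proceed with normal generation.
    return Response(text=generate_reply(message), session_suspended=False)
```

The key design choice is that the gate runs before, and can fully bypass, the generative model, so crisis handling never depends on model output.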