5. Human-Computer Interaction

Healthcare

The use of advanced AI for elderly and child care is subject to risks of psychological manipulation and misjudgment (see page 17). In addition, concerns about patients' privacy when AI uses medical records to research new diseases are drawing attention to the need for better governance of data privacy and patients' rights.

Source: MIT AI Risk Repository (mit623)

ENTITY: 2 - AI

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit623

Domain lineage: 5. Human-Computer Interaction (92 mapped risks) > 5.1 Overreliance and unsafe use

Mitigation strategy

1. Mandate a "Human-in-the-Loop" Oversight Mechanism and Ethical Guardrails for Care AI: Implement governance protocols requiring human monitoring and intervention to prevent psychological manipulation and foster appropriate emotional boundaries. All AI-generated outputs and decisions impacting care protocols must be auditable and subject to clinical or caregiver review to mitigate misjudgment risk and ensure the preservation of user autonomy.

2. Implement Privacy-by-Design with Advanced De-Identification for Research Data: Prioritize data minimization and employ state-of-the-art de-identification and pseudonymization techniques (e.g., differential privacy) on all medical records used for disease research. Strict adherence to regulatory frameworks (e.g., HIPAA, GDPR) and robust access controls are required to safeguard patient confidentiality and ensure compliance with informed consent models.

3. Establish an Integrated AI Risk Management and Transparency Framework: Adopt a formal risk management framework (e.g., NIST AI RMF) to govern the AI lifecycle, from development to deployment. This framework must mandate model transparency (Explainable AI - XAI) to allow stakeholders to understand decision rationale, establish clear accountability structures for all AI-driven outcomes, and require continuous system monitoring for vulnerabilities or anomalous behavior.
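The differential-privacy technique named in item 2 can be illustrated with the Laplace mechanism, a standard way to release aggregate statistics (such as patient counts) with a formal privacy guarantee. The sketch below is illustrative only: the record fields, the query predicate, and the epsilon value are hypothetical assumptions, not part of this repository entry.

```python
import math
import random

# Hypothetical toy dataset; in practice these would be real medical records.
RECORDS = [
    {"age": 72, "condition": "hypertension"},
    {"age": 34, "condition": "asthma"},
    {"age": 81, "condition": "diabetes"},
    {"age": 67, "condition": "arthritis"},
    {"age": 29, "condition": "asthma"},
]

def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a noisy count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one patient
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices. Smaller epsilon = stronger privacy, more noise.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

if __name__ == "__main__":
    # Example: how many patients are 65 or older? The released value is
    # perturbed, so no individual's presence can be confidently inferred.
    print(dp_count(RECORDS, lambda r: r["age"] >= 65, epsilon=1.0))
```

The design point mirrored from the mitigation text is that privacy protection is applied to the released statistic itself, rather than relying solely on stripping identifiers from the underlying records.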