5. Human-Computer Interaction
3 - Other

Human-AI interaction

Ethical concerns associated with the interaction between humans and AI.

Source: MIT AI Risk Repository (mit582)

ENTITY: 3 - Other

INTENT: 3 - Other

TIMING: 3 - Other

Risk ID: mit582

Domain lineage: 5. Human-Computer Interaction (92 mapped risks) > 5.1 Overreliance and unsafe use

Mitigation strategy

1. Establish and enforce robust human-in-the-loop (HITL) protocols and governance frameworks to ensure continuous human oversight in all critical decision-making processes. This mandates human review of high-stakes AI outputs, clear audit trails, and defined accountability structures to counteract automation bias and mitigate the risk of unintended harm.

2. Implement comprehensive AI literacy and ethical awareness programs for all stakeholders, including developers and end users. These programs must focus on cultivating appropriate reliance by educating users on the system's capabilities, limitations, and potential failure modes, thereby supporting balanced trust calibration and preventing skill erosion.

3. Employ human-centered design principles that prioritize transparency, explainability, and safety. AI systems should clearly disclose their non-human nature and offer interpretable rationales for their decisions, building user trust and giving users the information they need to challenge or override potentially unsafe or inappropriate recommendations.
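The review gate and audit trail described in mitigation 1 can be sketched in code. This is a minimal illustrative sketch, not part of the repository entry: the function names, the risk-score input, and the 0.5 threshold are all assumptions chosen for the example.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Decision:
    """One audit-trail entry: what the AI proposed and how it was resolved."""
    output: str
    risk_score: float
    approved: bool
    reviewer: str  # "auto" for low-risk pass-through, "human" for HITL review

def hitl_gate(output: str,
              risk_score: float,
              human_review: Callable[[str], bool],
              threshold: float = 0.5,  # assumed cutoff for "high-stakes"
              audit_log: Optional[List[Decision]] = None) -> bool:
    """Route high-stakes outputs to a human reviewer; log every decision.

    Outputs scoring at or above `threshold` require explicit human approval;
    everything else passes automatically, but both paths are logged so the
    audit trail and accountability structure stay complete.
    """
    if risk_score >= threshold:
        approved = human_review(output)
        reviewer = "human"
    else:
        approved = True
        reviewer = "auto"
    if audit_log is not None:
        audit_log.append(Decision(output, risk_score, approved, reviewer))
    return approved
```

For example, a low-risk output passes without review while a high-risk one is blocked when the (here, hypothetical) reviewer rejects it, and both decisions appear in the log.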