5. Human-Computer Interaction

Transformation of H2M interaction

Human interaction with machines poses a major challenge to society because it is already changing human behavior. At the same time, using AI on an everyday basis has become normal: googling for information, using navigation systems, and buying goods by speaking to an AI assistant such as Alexa or Siri (Mills, 2018; Thierer et al., 2017). While these changes greatly contribute to the acceptance of AI systems, they also blur the border between humans and machines to the point where users may no longer be able to distinguish between them. Advances like Google Duplex were heavily criticized for sounding realistically human while failing to disclose their identity as AI systems (Bergen, 2018).

Source: MIT AI Risk Repository (mit332)

ENTITY: 3 - Other

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit332

Domain lineage: 5. Human-Computer Interaction > 5.1 Overreliance and unsafe use (92 mapped risks)

Mitigation strategy

1. **Implement Mandatory Transparency and Disclosure Protocols.** Enforce the mandatory, clear, and conspicuous disclosure to natural persons that they are interacting with an artificial intelligence system at the commencement of any non-obvious human-machine interaction (Source [17], [20]). This is critical to upholding ethical standards, preventing deception, and preserving user autonomy (Source [6]). A minimal sketch of such a disclosure-first wrapper appears after this list.

2. **Establish a Comprehensive AI Governance and Human-Centric Design Framework.** Develop a formal AI governance strategy encompassing policies and processes that mandate responsible development and deployment, specifically addressing the risk of human-machine identity blur (Source [13], [14]). Integrate human-in-the-loop (HITL) or human-in-command (HIC) mechanisms to maintain critical human oversight and situation awareness, thereby mitigating the risk of overreliance and ensuring that AI serves as augmentation rather than an opaque replacement (Source [2], [9], [11]); a sketch of such a gating mechanism follows below.

3. **Maintain Immutable Audit Trails and Continuous Monitoring for Accountability.** Institute detailed, readily accessible audit trails and logs of AI system behaviors, decisions, and the context of all critical human-AI interactions (Source [2]). This continuous, end-to-end monitoring is essential for compliance, facilitating post-incident forensic review, and assigning clear accountability in scenarios where the AI's actions or identity were ambiguous or misleading (Source [2], [14]); a hash-chained logging sketch closes this section.
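
To make the first strategy concrete, here is a minimal sketch of a disclosure-first session wrapper, not a prescribed implementation. All names (`DisclosedSession`, `DISCLOSURE`, the `backend` callable) are hypothetical illustrations and not part of any cited framework.

```python
# Hypothetical sketch: a session wrapper that issues the mandatory AI
# disclosure before the first reply of any interaction.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

DISCLOSURE = "Notice: you are interacting with an automated AI system, not a human."

@dataclass
class DisclosedSession:
    backend: Callable[[str], str]                  # assumed: maps a message to a reply
    transcript: List[Tuple[str, str]] = field(default_factory=list)

    def send(self, user_message: str) -> str:
        # Disclose identity once, at the commencement of the interaction.
        if not self.transcript:
            self.transcript.append(("system", DISCLOSURE))
        reply = self.backend(user_message)
        self.transcript.append(("user", user_message))
        self.transcript.append(("assistant", reply))
        return reply

# Usage with a stub backend:
session = DisclosedSession(backend=lambda msg: f"(echo) {msg}")
session.send("Book me a table for two.")
assert session.transcript[0] == ("system", DISCLOSURE)
```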
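
The human-in-the-loop mechanism in the second strategy can be sketched as a gate that routes consequential actions to a human reviewer before execution. The `RISK_THRESHOLD` value and the `approve` callback are assumptions for illustration; a real deployment would calibrate the threshold and integrate an actual review queue.

```python
# Hypothetical sketch: human-in-the-loop gating for consequential actions.
from typing import Callable

RISK_THRESHOLD = 0.5  # assumed policy threshold; calibrate per deployment

def execute_with_oversight(
    action: str,
    risk_score: float,
    approve: Callable[[str], bool],   # assumed: routes the action to a human reviewer
) -> str:
    """Run low-risk actions autonomously; escalate the rest to a human."""
    if risk_score >= RISK_THRESHOLD:
        if not approve(action):
            return f"blocked by human reviewer: {action}"
        # Human-in-command: the reviewer explicitly authorized this action.
    return f"executed: {action}"

# Usage with a stub reviewer that denies every escalated action:
print(execute_with_oversight("send refund", 0.8, approve=lambda a: False))
print(execute_with_oversight("send greeting", 0.1, approve=lambda a: False))
```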
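
For the third strategy, one common way to make an audit trail tamper-evident is hash chaining: each record stores the hash of its predecessor, so any retroactive edit invalidates everything after it. The sketch below is an assumed, simplified design; a production system would add persistent storage, signing, and access control.

```python
# Hypothetical sketch: an append-only, hash-chained audit trail for
# logging critical human-AI interactions.
import hashlib
import json
import time

class AuditTrail:
    def __init__(self) -> None:
        self._records: list = []

    def log(self, event: str, context: dict) -> dict:
        # Chain each record to its predecessor via the previous hash.
        prev_hash = self._records[-1]["hash"] if self._records else "0" * 64
        body = {
            "timestamp": time.time(),
            "event": event,
            "context": context,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        record = {**body, "hash": digest}
        self._records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; False means the trail was altered."""
        prev = "0" * 64
        for rec in self._records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

# Usage:
trail = AuditTrail()
trail.log("ai_disclosure_shown", {"session": "s1"})
trail.log("action_escalated", {"session": "s1", "action": "send refund"})
assert trail.verify()
```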