
Trustworthiness and Autonomy

Human trust in systems, institutions, and people represented by system outputs evolves as generative AI systems are increasingly embedded in daily life.

Source: MIT AI Risk Repository (mit174)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit174

Domain lineage

5. Human-Computer Interaction (92 mapped risks)

5.2 > Loss of human agency and autonomy

Mitigation strategy

1. Mandate robust provenance and transparency mechanisms, specifically requiring clear and persistent labeling or watermarking of all generative AI system outputs, to establish content origin and make it distinguishable from human-authored material.

2. Establish a human-in-the-loop (HITL) governance model for high-stakes generative AI applications, ensuring that human operators retain ultimate oversight and can critically validate outputs before deployment or dissemination, thereby preserving human agency and reliability.

3. Develop and deploy digital literacy and cognitive resilience programs that educate users and the public to critically evaluate AI-generated content, recognize the signs of misinformation and deepfakes, and counter the erosion of trust and critical thinking skills.

ADDITIONAL EVIDENCE

With the increased ease of creating machine-generated content, which can produce misinformation [260] as a product, distinguishing human- from machine-generated content, and verified information from misinformation, will become increasingly difficult. This poses a series of threats to trust in media and in what we experience with our own hearing and vision.