3. Misinformation

Eroding trust and undermining shared knowledge

AI assistants may contribute to the spread of large quantities of factually inaccurate and misleading content, with negative consequences for societal trust in information sources and institutions, as individuals increasingly struggle to discern truth from falsehood.

Source: MIT AI Risk Repository (mit435)

ENTITY: 2 - AI

INTENT: 3 - Other

TIMING: 2 - Post-deployment

Risk ID: mit435

Domain lineage

3. Misinformation > 3.2 Pollution of information ecosystem and loss of consensus reality

74 mapped risks

Mitigation strategy

1. Integrate Retrieval Augmented Generation (RAG) and rigorous data quality validation into AI development pipelines to anchor generative outputs to verifiable, domain-specific sources, reducing hallucination and factual inaccuracy at the source.

2. Mandate Explainable AI (XAI) and comprehensive bias auditing for content moderation algorithms to ensure procedural transparency, prevent unintended censorship, and maintain public accountability for system outputs and decisions.

3. Establish human-AI collaborative workflows for content review, using advanced multilingual AI models for rapid FIMI (Foreign Information Manipulation and Interference) detection and attribution, with final validation and context provision reserved for human subject matter experts and fact-checkers.
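To illustrate the RAG idea in mitigation 1, here is a minimal sketch of retrieval-grounded prompting. The corpus, source ids, and similarity function are all illustrative assumptions (a bag-of-words cosine stands in for a real embedding model); it is not an implementation from the repository.

```python
import math
from collections import Counter

# Hypothetical mini-corpus of verified, domain-specific sources
# (source ids and texts are invented for this sketch).
SOURCES = {
    "who-measles": "Measles vaccines are safe and highly effective at preventing infection.",
    "nasa-climate": "Global surface temperatures have risen markedly since the late 19th century.",
    "cdc-flu": "Annual influenza vaccination reduces the risk of flu illness.",
}

def _vector(text: str) -> Counter:
    """Bag-of-words term counts: a deliberately simple stand-in for embeddings."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return ids of the k sources most similar to the query."""
    q = _vector(query)
    ranked = sorted(SOURCES, key=lambda sid: _cosine(q, _vector(SOURCES[sid])), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str) -> str:
    """Build a generation prompt that anchors the model to retrieved sources."""
    context = "\n".join(f"[{sid}] {SOURCES[sid]}" for sid in retrieve(query))
    return (
        "Answer using ONLY the sources below; cite source ids.\n"
        f"{context}\n\nQuestion: {query}"
    )
```

The key design point is that generation is conditioned on retrieved, citable text rather than on the model's parametric memory alone, which makes factual claims traceable to the underlying sources.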