3. Misinformation

Information harms

Information-based harms capture concerns of misinformation, disinformation, and malinformation. Algorithmic systems, especially generative models and recommender systems, can lead to these information harms.

Source: MIT AI Risk Repository (mit153)

ENTITY

3 - Other

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit153

Domain lineage

3. Misinformation (74 mapped risks) > 3.1 False or misleading information

Mitigation strategy

1. Mandate the implementation of data quality and integrity measures, including regular checks and audits of training data for completeness and verified provenance. For generative models, prioritize the deployment of Retrieval-Augmented Generation (RAG) architectures to connect the system to verified, authoritative external data sources, thereby mitigating the risk of hallucination and factual error at the source.
2. Institute rigorous output validation protocols, incorporating both automated AI-based detection tools for identifying inaccurate or anomalous content and a human-in-the-loop (HITL) oversight mechanism for critical or high-risk outputs (see the sketch following this list). This process must also include continuous monitoring and auditing of system outputs and usage patterns to promptly detect and contain the algorithmic amplification of misleading information.
3. Proactively develop and deploy sustained digital literacy and cognitive resilience campaigns aimed at fostering the end user's ability to critically assess information credibility. Furthermore, enhance transparency by providing users with clear explanations of the model's reasoning and the provenance of the generated information to support informed decision-making.
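
To make mitigations 1 and 2 concrete, below is a minimal illustrative sketch, not the repository's prescribed implementation, of a retrieval-augmented generation pipeline whose outputs are validated before release, with weakly grounded results escalated to a human reviewer. All names here (VerifiedSource, retrieve, generate_answer, validate_output, answer_with_oversight) and the term-overlap scoring are hypothetical stand-ins; a production system would substitute a real retriever, generative model, and detection tooling.

```python
# Hypothetical sketch of mitigations 1 and 2: RAG grounding plus
# automated output validation with a human-in-the-loop (HITL) gate.
# Every name and scoring rule below is illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class VerifiedSource:
    """A document from an authoritative, provenance-checked corpus."""
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[VerifiedSource],
             k: int = 3) -> list[VerifiedSource]:
    """Toy lexical retriever: rank sources by query-term overlap."""
    terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda s: len(terms & set(s.text.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def generate_answer(query: str, evidence: list[VerifiedSource]) -> str:
    """Placeholder for a generative-model call grounded in retrieved
    evidence; a real system would pass `evidence` into the prompt."""
    cited = ", ".join(s.doc_id for s in evidence)
    return f"Answer to {query!r} grounded in sources: {cited}"


def validate_output(answer: str, evidence: list[VerifiedSource]) -> float:
    """Toy automated check: fraction of retrieved sources actually cited.
    Stands in for an AI-based detector of inaccurate or anomalous content."""
    cited = sum(1 for s in evidence if s.doc_id in answer)
    return cited / max(len(evidence), 1)


def answer_with_oversight(query: str, corpus: list[VerifiedSource],
                          threshold: float = 0.5) -> str:
    """Full pipeline: retrieve, generate, validate, and gate on HITL."""
    evidence = retrieve(query, corpus)
    answer = generate_answer(query, evidence)
    if validate_output(answer, evidence) < threshold:
        # Human-in-the-loop gate for high-risk or weakly grounded outputs.
        return f"[ESCALATED TO HUMAN REVIEW] {answer}"
    return answer


if __name__ == "__main__":
    corpus = [
        VerifiedSource("who-2024-01", "verified health guidance text"),
        VerifiedSource("gov-2023-11", "official statistics release"),
    ]
    print(answer_with_oversight("latest health guidance", corpus))
```

The design point the sketch illustrates is that the generator only ever sees provenance-checked sources, and any answer that cannot be tied back to that evidence is escalated rather than released.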

ADDITIONAL EVIDENCE

Users are increasingly exposed to information assembled and presented algorithmically, and many users lack the literacy to comprehend how algorithms influence what they can and cannot see.