3. Misinformation

Misinformation

Incorrect information that is not intentionally generated by malicious users to cause harm, but is unintentionally generated by LLMs because they lack the ability to reliably provide factually correct information.

Source: MIT AI Risk Repository (mit476)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit476

Domain lineage

3. Misinformation

74 mapped risks

3.1 > False or misleading information

Mitigation strategy

1. Implement Retrieval-Augmented Generation (RAG) systems: Utilize RAG methodologies to anchor Large Language Model (LLM) responses to curated and verified external knowledge bases. This strategy mitigates hallucinations and the generation of factually incorrect information by grounding the output in real-time, trustworthy data sources.

2. Conduct adversarial fine-tuning and factual alignment: Enhance the intrinsic reliability of the model through targeted fine-tuning (e.g., parameter-efficient tuning and structure tuning) to prioritize logical reasoning and factual correctness over general helpfulness. This includes training the model to abstain or signal uncertainty when sufficient grounding is absent, rather than generating a plausible but fabricated response.

3. Establish multi-layered human oversight and validation mechanisms: Institute mandatory cross-verification and human-in-the-loop processes, particularly for outputs in high-stakes domains such as healthcare or finance. This should be supported by automated system-level defenses that employ confidence scoring and contextual grounding checks to flag or block responses that exceed predefined thresholds for potential misinformation.