3. Misinformation

False information

The chatbot outputs information that contradicts known facts, authoritative sources, or provided source documents (also known as hallucination).

Source: MIT AI Risk Repository (mit1395)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit1395

Domain lineage

3. Misinformation

74 mapped risks

3.1 > False or misleading information

Mitigation strategy

1. **Implement Retrieval-Augmented Generation (RAG) Architecture**
   Integrate external, verified knowledge bases into the LLM workflow to dynamically ground responses in factual, real-time data. This significantly reduces the model's reliance on its static training data, mitigating fabricated information (hallucinations) and enhancing factual accuracy.

2. **Establish Multi-Layered Output Validation and Human-in-the-Loop Processes**
   Deploy automated systems for contextual grounding checks and set confidence thresholds on generated outputs. Outputs that fail validation or fall below the factual confidence threshold must be routed for mandatory human oversight and verification to remediate inaccuracies before they are presented to the end user.

3. **Utilize Advanced Prompt Engineering and Behavior Shaping**
   Employ inference-time techniques such as Chain-of-Thought (CoT) prompting to enforce logical, step-by-step reasoning, and use context injection (e.g., explicit safety reminders, instructions to cite sources, and directives to express uncertainty) to constrain the model's behavior and encourage factuality.
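The first strategy (RAG grounding) can be sketched as follows. The knowledge base, the keyword-overlap retriever, and the prompt template are all illustrative stand-ins, not a specific vendor's API; production systems typically use embedding-based retrieval over a vetted document store.

```python
import re

# Illustrative knowledge base; a real deployment would use a curated,
# versioned document store with embedding-based search.
KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The Great Wall of China is over 13,000 miles long.",
]

def _terms(text: str) -> set[str]:
    """Lowercased alphanumeric tokens, for naive overlap scoring."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query (toy retriever)."""
    q = _terms(query)
    scored = sorted(corpus, key=lambda doc: len(q & _terms(doc)), reverse=True)
    return scored[:top_k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Inject retrieved passages so the model answers from them,
    cites them, and admits uncertainty when they don't cover the query."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. Cite the source you used.\n"
        "If the sources do not contain the answer, say you are unsure.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

prompt = build_grounded_prompt("Where is the Eiffel Tower?", KNOWLEDGE_BASE)
print(prompt)
```

Because the retrieved passages are injected at inference time, answers can reflect updated knowledge without retraining the model, which is the core of the grounding argument above.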
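The second strategy (confidence-threshold routing with human oversight) reduces to a gate on a grounding score. The threshold value and the `grounding_score` field are assumptions for illustration; real systems derive such scores from automated grounding checks against the retrieved sources.

```python
from dataclasses import dataclass

# Assumed policy value; tune per deployment and risk tolerance.
CONFIDENCE_THRESHOLD = 0.75

@dataclass
class Draft:
    text: str
    grounding_score: float  # fraction of claims traced to retrieved sources

def route(draft: Draft) -> str:
    """Release high-confidence outputs; escalate the rest to a human reviewer."""
    if draft.grounding_score >= CONFIDENCE_THRESHOLD:
        return "release"
    return "human_review"

for d in (
    Draft("Paris is the capital of France.", 0.95),
    Draft("The moon is made of cheese.", 0.10),
):
    print(f"{route(d)}: {d.text}")
```

The key design choice is that failing drafts are routed to review rather than silently discarded, so inaccuracies are remediated before reaching the end user.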
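The third strategy (CoT prompting plus context injection) is purely an inference-time prompt construction. The reminder text and template below are illustrative assumptions, not a standard API.

```python
# Behavior-shaping directives injected into every prompt (illustrative wording).
SAFETY_REMINDER = (
    "Reason step by step, cite sources for factual claims, "
    "and say 'I am not sure' when the evidence is insufficient."
)

def cot_prompt(question: str) -> str:
    """Prepend safety/citation directives and request explicit reasoning
    before the final answer (Chain-of-Thought prompting)."""
    return (
        f"{SAFETY_REMINDER}\n\n"
        f"Question: {question}\n"
        "Let's think step by step before giving a final answer."
    )

print(cot_prompt("In what year did Apollo 11 land on the Moon?"))
```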