3. Misinformation · 2 - Post-deployment

Misleading Information

Large models are susceptible to hallucination, sometimes producing nonsensical or unfaithful content that results in misleading outputs.

Source: MIT AI Risk Repository (mit67)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit67

Domain lineage

3. Misinformation

74 mapped risks

3.1 > False or misleading information

Mitigation strategy

1. Integrate Retrieval-Augmented Generation (RAG) architecture to ground model outputs in validated external knowledge sources, thereby enhancing factual consistency and reducing the generation of unfounded content.

2. Institute robust detection mechanisms, such as contextual grounding checks and hallucination scoring within agentic workflows, to quantitatively measure factual fidelity and flag responses falling below a predefined confidence threshold.

3. Establish mandatory human-in-the-loop oversight and verification processes, especially for outputs generated in critical, high-stakes domains, to serve as a final corrective control against residual probabilistic errors.
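The grounding check and review gate described above can be sketched as follows. This is a minimal illustration, not a production implementation: the grounding score here is a crude token-overlap proxy (real systems typically use learned entailment or NLI-based hallucination scorers), and all function names and the threshold value are illustrative assumptions.

```python
def grounding_score(answer: str, sources: list[str]) -> float:
    """Fraction of answer tokens that appear in any retrieved source.

    Crude proxy for factual grounding; stands in for a learned
    hallucination scorer in this sketch.
    """
    answer_tokens = set(answer.lower().split())
    source_tokens: set[str] = set()
    for passage in sources:
        source_tokens.update(passage.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & source_tokens) / len(answer_tokens)


def review_gate(answer: str, sources: list[str],
                threshold: float = 0.9) -> dict:
    """Flag answers below a confidence threshold for human review
    (the human-in-the-loop control from step 3)."""
    score = grounding_score(answer, sources)
    return {
        "answer": answer,
        "score": round(score, 2),
        "needs_human_review": score < threshold,
    }


# Example: the second answer contradicts the retrieved passage,
# scores lower, and is routed to a human reviewer.
sources = ["The Eiffel Tower was completed in 1889 in Paris."]
print(review_gate("The Eiffel Tower was completed in 1889", sources))
print(review_gate("The Eiffel Tower was completed in 1925", sources))
```

In an agentic workflow, a gate like this would sit between the RAG generation step and the user-facing response, so that only sufficiently grounded outputs are released automatically.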