Hallucination
Hallucination occurs when a model generates content that is factually inaccurate or untruthful with respect to its training data or input. This is also sometimes referred to as a lack of faithfulness or a lack of groundedness.
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit1315
Domain lineage
3. Misinformation
3.1 > False or misleading information
Mitigation strategy
1. Prioritize a Retrieval-Augmented Generation (RAG) framework to ensure factual grounding. Dynamically incorporate information retrieved from external, verified knowledge repositories into the model's prompt, anchoring its responses in authoritative data and substantially reducing the generation of unfaithful content.
2. Integrate advanced prompting techniques for reasoning and uncertainty into the system design. Use Chain-of-Thought (CoT) prompting to decompose complex queries, and structure prompts with explicit instructions for uncertainty quantification (e.g., source attribution or confidence scoring) to improve transparency and self-correction during generation.
3. Establish rigorous data integrity and model alignment protocols by curating high-quality, clean, domain-specific fine-tuning datasets. This foundational step addresses root causes such as training-data deficiencies and model overfitting, improving the model's inherent factual accuracy and reliability prior to deployment.
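The RAG grounding step above can be sketched as a retrieve-then-prompt pipeline. This is a minimal illustration, not a prescribed implementation: the keyword-overlap retriever, the document store, and the prompt template are all assumptions standing in for a production retriever (e.g., a vector index over a verified knowledge repository).

```python
# Minimal sketch of RAG-style prompt grounding.
# The scoring function and prompt wording here are illustrative assumptions.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (placeholder
    for a real retriever such as a vector similarity search)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Anchor the model's answer in retrieved passages and request source
    attribution, per the uncertainty-quantification guidance above."""
    passages = retrieve(query, documents)
    context = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(passages))
    return (
        "Answer using ONLY the sources below. Cite sources as [n]. "
        "If the sources are insufficient, say you cannot answer.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

docs = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is the highest mountain above sea level.",
]
prompt = build_grounded_prompt("How tall is the Eiffel Tower?", docs)
```

The grounded prompt is then sent to the model in place of the bare question; the explicit citation instruction gives downstream checks a hook for verifying that each claim traces back to a retrieved source.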