Hallucinations
LLMs generate nonsensical, untruthful, or factually incorrect content
ENTITY
2 - AI
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit38
Domain lineage
3. Misinformation
3.1 > False or misleading information
Mitigation strategy
1. Implement Retrieval-Augmented Generation (RAG). Integrate real-time knowledge retrieval from verified, external knowledge bases to ground the LLM's response generation process in factual evidence. This approach significantly reduces the generation of unsupported content by anchoring the output to external, reliable sources.
2. Apply Structured Prompt Engineering with Constraint Enforcement. Utilize advanced prompting techniques, such as Chain-of-Thought (CoT) reasoning, to compel the model to outline its logical steps prior to generating a final answer. Crucially, enforce explicit constraints in the prompt, instructing the model to generate content exclusively from the retrieved context and to abstain from answering when information is absent.
3. Fortify Factual Alignment through Fine-Tuning. Enhance the model's intrinsic reliability by conducting Supervised Fine-Tuning (SFT) and alignment processes (e.g., RLHF or DPO) on curated datasets that explicitly prioritize factual accuracy. This systemic training reinforces the model's internal knowledge and behavioral policy to inherently favor truthfulness over fluency.
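Strategies 1 and 2 can be sketched together as a minimal, hypothetical RAG pipeline: keyword-overlap retrieval over an in-memory knowledge base stands in for a real vector store, and the prompt template enforces the grounding and abstention constraints. All names and the knowledge base below are illustrative assumptions, not a specific library's API.

```python
# Illustrative in-memory knowledge base (stand-in for a verified external source).
KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the tallest mountain above sea level.",
    "Python was first released in 1991.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query (toy stand-in for
    embedding-based retrieval in a production RAG system)."""
    q = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Build a prompt that constrains the model to the retrieved context
    and instructs it to abstain when the context is insufficient."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, reply exactly: I don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt("Where is the Eiffel Tower located?")
print(prompt)
```

The prompt passed to the LLM would then contain both the grounding context and the explicit abstention instruction; swapping the toy retriever for a vector database is the main change needed in practice.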