
Knowledge Gaps

Since the training corpora of LLMs cannot contain all possible world knowledge [114]–[119], and it is challenging for LLMs to grasp the long-tail knowledge within their training data [120], [121], LLMs inherently possess knowledge boundaries [107]. Therefore, the gap between the knowledge invoked by an input prompt and the knowledge embedded in the LLM can lead to hallucinations.

Source: MIT AI Risk Repository (mit39)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit39

Domain lineage

3. Misinformation

74 mapped risks

3.1 > False or misleading information

Mitigation strategy

1. Retrieval-Augmented Generation (RAG) Implementation

Implement a robust Retrieval-Augmented Generation (RAG) framework to dynamically ground responses in verifiable, external knowledge sources. This approach effectively expands the model's knowledge boundaries during inference, mitigating factual errors stemming from missing or long-tail knowledge not encoded in the original training corpus. Further enhancement can be achieved through post-hoc consistency checking to verify that the generated output is semantically aligned with the retrieved evidence.

2. Confidence-Based Abstention and Calibration

Integrate mechanisms that enable the Large Language Model (LLM) to abstain from generating an answer when its internal confidence (e.g., token probability or a verbalized confidence score) falls below a predetermined, optimized threshold. This strategy addresses the inherent risk of 'rewarded guessing' by penalizing confident errors and promoting humility in the face of genuine knowledge gaps, a particularly relevant technique for multilingual or specialized domains.

3. Advanced Prompt Engineering and Instruction Tuning

Employ advanced prompt engineering techniques to govern model behavior and utilization of knowledge. This includes the application of explicit instruction tuning (e.g., directives to 'answer only based on the provided documents') and structured reasoning paths, such as Chain-of-Thought (CoT) prompting. These methods reinforce the intended reliance on grounded information, minimize output drift from user objectives, and enhance logical consistency during knowledge synthesis.
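The grounding step of a RAG pipeline can be sketched as follows. This is a minimal illustration only: the keyword-overlap retriever, function names, and corpus are all hypothetical stand-ins (a production system would use dense vector retrieval and a real document store), but the shape of the loop — retrieve evidence, then constrain the prompt to it — matches strategy 1 above.

```python
# Minimal RAG grounding sketch (illustrative, not a real framework API).

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query.

    A naive keyword-overlap scorer stands in for a real retriever
    (e.g., dense embeddings with approximate nearest-neighbor search).
    """
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: -len(query_terms & set(doc.lower().split())),
    )
    return scored[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Wrap retrieved evidence in an instruction that restricts the
    model to the provided documents, per strategy 3's directive style."""
    context = "\n".join(f"- {doc}" for doc in docs)
    return (
        "Answer only based on the provided documents. "
        "If the answer is not in them, say so.\n"
        f"Documents:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "The Eiffel Tower is in Paris.",
    "Mount Fuji is in Japan.",
    "Pandas mostly eat bamboo.",
]
docs = retrieve("Where is the Eiffel Tower", corpus, k=1)
prompt = build_grounded_prompt("Where is the Eiffel Tower?", docs)
```

The same retrieved documents can later be reused for the post-hoc consistency check mentioned above, comparing the generated answer against the evidence it was grounded in.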
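The abstention gate of strategy 2 can likewise be sketched in a few lines. The threshold value and the example log-probabilities below are assumed for illustration; in practice the threshold would be tuned on a calibration set, and the token log-probabilities would come from the serving API's per-token scores.

```python
import math

# Assumed threshold for illustration; tune on held-out calibration data.
ABSTAIN_THRESHOLD = 0.75

def mean_token_confidence(token_logprobs: list[float]) -> float:
    """Geometric-mean probability of the generated tokens,
    computed as exp(mean log-probability)."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def answer_or_abstain(answer: str, token_logprobs: list[float],
                      threshold: float = ABSTAIN_THRESHOLD):
    """Return the answer only when confidence clears the threshold;
    otherwise abstain rather than risk a confident error."""
    confidence = mean_token_confidence(token_logprobs)
    if confidence < threshold:
        return "ABSTAIN", confidence
    return answer, confidence

# High-confidence generation passes the gate...
result, _ = answer_or_abstain("Paris", [-0.05, -0.10, -0.02])
# ...while a low-confidence one triggers abstention.
result, _ = answer_or_abstain("Paris", [-1.2, -0.9, -2.0])
```

The design choice here is deliberately asymmetric: a withheld answer is recoverable (the system can escalate or retrieve more evidence), whereas a confidently wrong answer is exactly the failure mode this risk entry describes.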