7. AI System Safety, Failures, & Limitations

Lack of ability to generate accurate information

AI models may generate false or misleading information because they cannot reliably distinguish true statements from false ones.

Source: MIT AI Risk Repository, mit1074

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit1074

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Implement rigorous data governance and auditing protocols, focusing on dataset quality, diversity, and representativeness, to minimize the prevalence of misinformation and systemic bias within the model's training corpus.

2. Deploy Retrieval-Augmented Generation (RAG) architectures to ground model outputs in verifiable, authoritative external knowledge sources, coupled with automated runtime guardrails that enforce factual adherence and policy compliance during inference.

3. Apply advanced prompt engineering techniques, such as Chain-of-Thought verification and explicit uncertainty instructions, and establish a mandatory human-in-the-loop review process for all high-stakes outputs to validate claims against external evidence before final use.
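The RAG-based grounding in step 2 can be sketched in a few lines. This is a minimal, illustrative example, not a production implementation: the knowledge base, the keyword-overlap retriever, and the function names (`retrieve`, `build_grounded_prompt`) are all hypothetical stand-ins for a real vector store and embedding-based retrieval. It shows the core idea of prepending retrieved evidence to the prompt and adding an explicit uncertainty instruction (step 3) so the model is steered away from unsupported claims.

```python
# Minimal RAG grounding sketch. All names here are illustrative,
# not part of any specific library or API.

KNOWLEDGE_BASE = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8,849 metres high.",
    "The Great Wall of China is over 21,000 km long.",
]

def retrieve(query: str, passages: list[str], k: int = 1) -> list[str]:
    """Rank passages by naive word overlap with the query.
    A real system would use embedding similarity instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved evidence so the model answers from sources,
    and add an explicit uncertainty instruction."""
    evidence = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return (
        f"Context:\n{evidence}\n\n"
        "Answer using only the context above. "
        "If the context is insufficient, say 'I don't know'.\n"
        f"Question: {query}"
    )

prompt = build_grounded_prompt("How tall is the Eiffel Tower?")
```

The returned `prompt` would then be sent to the language model; the runtime guardrails mentioned in step 2 would additionally check the model's answer against the retrieved evidence before it is shown to a user.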