Misinformation
Inaccurate outputs from text-generating large language models such as Bard or ChatGPT have already been widely documented. Even without any intent to lie or mislead, these generative AI tools can produce harmful misinformation. The harm is exacerbated by the polished, typically well-written style of AI-generated text and by the interleaving of falsehoods with true facts, which can lend fabrications a veneer of legitimacy. As reported in the Washington Post, for example, a law professor was included on an AI-generated “list of legal scholars who had sexually harassed someone,” even though no such allegation existed.10
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit513
Domain lineage
3. Misinformation
3.1 > False or misleading information
Mitigation strategy
1. Prioritize Retrieval-Augmented Generation (RAG) implementation to ground LLM outputs in verified external knowledge bases, which significantly reduces the incidence of factual inaccuracies and hallucinations by connecting the model to authoritative, domain-specific data.
2. Mandate stringent human oversight and cross-verification protocols for all LLM-generated content intended for critical or public use. This includes deploying automated output validation mechanisms to check claims against trusted sources and detect inconsistencies before information is disseminated.
3. Enforce transparency by clearly communicating the inherent limitations and potential for hallucination to users, coupled with mandatory training programs to cultivate critical thinking and prevent overreliance on AI-generated content.
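The grounding idea behind strategy 1 can be sketched in a few lines. The example below is a minimal, illustrative sketch only: the knowledge-base passages, the word-overlap retrieval scoring, and the function names are assumptions for demonstration, not any specific RAG product's API, and the actual model call is omitted.

```python
import re

def retrieve(query, knowledge_base, top_k=2):
    """Rank knowledge-base passages by naive word overlap with the query.
    Real systems would use embedding similarity; overlap keeps this self-contained."""
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        knowledge_base,
        key=lambda passage: len(q_words & set(re.findall(r"\w+", passage.lower()))),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query, knowledge_base, top_k=2):
    """Prepend retrieved passages so the model is instructed to answer
    only from the supplied, verified sources."""
    passages = retrieve(query, knowledge_base, top_k)
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the sources below; reply 'unknown' if they "
        "do not cover the question.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    # Hypothetical curated knowledge base (an assumption for this sketch).
    kb = [
        "The knowledge base is curated and versioned by domain experts.",
        "Each passage carries a citation back to an authoritative source.",
        "Unrelated trivia passage about something else entirely.",
    ]
    print(build_grounded_prompt(
        "Which sources back each passage citation?", kb, top_k=1))
```

The design point is that the model never answers from its parametric memory alone: the prompt constrains it to the retrieved, citable passages, which is what makes downstream claim verification (strategy 2) tractable.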