3. Misinformation (2 - Post-deployment)

Propagating misconceptions / false beliefs

Generating or spreading false, low-quality, misleading, or inaccurate information that leads people to form false or inaccurate perceptions and beliefs

Source: MIT AI Risk Repository (mit1344)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit1344

Domain lineage

3. Misinformation

74 mapped risks

3.1 > False or misleading information

Mitigation strategy

1. Prioritize and enforce rigorous data governance protocols to ensure all AI training data is high-quality, complete, and verifiable. This includes conducting regular data audits and deploying Retrieval-Augmented Generation (RAG) architectures that anchor generative models to verified, external knowledge bases, reducing the incidence of factual "hallucinations."

2. Implement robust human-in-the-loop oversight that requires human review and validation of all high-risk AI-generated outputs before dissemination. Concurrently, maintain comprehensive, immutable audit trails and system logs to ensure transparency, traceability, and accountability for all AI-driven decisions and content production.

3. Run sustained, multi-level digital literacy and cognitive resilience campaigns for both employees and end users. These programs must teach stakeholders to identify AI-driven deception (e.g., deepfakes, sophisticated misinformation) and establish clear organizational protocols for cross-referencing and verifying information against multiple reputable sources.
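The RAG grounding in item 1 can be sketched minimally: retrieve verified passages first, then constrain the generative model to answer only from that retrieved context. The keyword-overlap retriever and prompt wording below are illustrative placeholders, not a production design.

```python
# Toy sketch of the retrieval step in a RAG pipeline. A real system would
# use embedding-based search over a vetted knowledge base; word overlap
# stands in for that scoring here.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, knowledge_base: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query (illustrative scoring)."""
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(tokenize(doc) & tokenize(query)),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, knowledge_base: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from context."""
    context = "\n".join(retrieve(query, knowledge_base))
    return (
        "Answer using ONLY the context below; say 'unknown' otherwise.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )
```

Because the model is told to refuse when the retrieved context does not contain the answer, unverifiable claims are surfaced as "unknown" rather than fabricated.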
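The "immutable audit trail" in item 2 is commonly approximated with a hash chain: each log entry includes the hash of its predecessor, so any after-the-fact edit breaks the chain and is detectable on verification. A minimal sketch, assuming an in-memory log (a deployed system would persist entries to append-only storage):

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry commits to the previous entry's
    hash; tampering with any earlier entry invalidates the chain."""

    GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

    def __init__(self) -> None:
        self.entries: list[dict] = []

    @staticmethod
    def _digest(event: dict, prev: str) -> str:
        # Canonical JSON (sorted keys) so the same event always hashes alike.
        payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        self.entries.append(
            {"event": event, "prev": prev, "hash": self._digest(event, prev)}
        )

    def verify(self) -> bool:
        """Recompute every hash; False if any entry was altered or reordered."""
        prev = self.GENESIS
        for rec in self.entries:
            if rec["prev"] != prev or rec["hash"] != self._digest(rec["event"], prev):
                return False
            prev = rec["hash"]
        return True
```

Each AI-generated output and its human review decision would be appended as an event, giving reviewers a tamper-evident record for accountability audits.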