3. Misinformation (Post-deployment)


AI systems generating and facilitating the spread of inaccurate or misleading information that causes people to develop false beliefs

Source: MIT AI Risk Repository (mit1343)

ENTITY: 2 - AI

INTENT: 3 - Other

TIMING: 2 - Post-deployment

Risk ID: mit1343

Domain lineage: 3. Misinformation (74 mapped risks) > 3.1 False or misleading information

Mitigation strategy

1. Implement rigorous data quality and integrity measures, complemented by Retrieval-Augmented Generation (RAG) architectures, to anchor generative AI outputs to verified, trusted data sources and reduce fabrication or hallucination at the source.

2. Establish a multi-layered detection and verification system to proactively identify inaccurate outputs, using models that measure "faithfulness" to source documents and content-provenance technologies, such as digital watermarking and metadata, to clearly label AI-generated content.

3. Integrate mandatory human oversight and governance by establishing explicit control points within the content-generation workflow that require human review and decision-making, providing a final layer of checks and balances against the unauthorized or erroneous spread of misinformation.
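The "faithfulness" check in step 2 can be sketched with a deliberately naive heuristic: flag generated sentences whose content words are poorly supported by the retrieved source text, so they can be routed to the human review step. This is only an illustrative stand-in for the advanced models the strategy refers to; the function names, stop-word list, and threshold here are all hypothetical choices, not part of the repository entry.

```python
import re

STOP = {"the", "a", "an", "of", "to", "in", "and", "is", "are",
        "that", "it", "was"}  # minimal stop-word list (assumption)

def content_words(text: str) -> set[str]:
    """Lowercased alphabetic tokens, minus stop words."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP}

def faithfulness(answer: str, sources: list[str], threshold: float = 0.5):
    """Score how well each sentence of `answer` overlaps the source vocabulary.

    Returns (mean overlap score in [0, 1], list of unsupported sentences).
    A sentence whose content-word overlap falls below `threshold` is
    treated as potentially fabricated and queued for human review.
    """
    source_vocab: set[str] = set()
    for s in sources:
        source_vocab |= content_words(s)
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    scores, unsupported = [], []
    for sent in sentences:
        words = content_words(sent)
        overlap = len(words & source_vocab) / len(words) if words else 1.0
        scores.append(overlap)
        if overlap < threshold:
            unsupported.append(sent)
    return (sum(scores) / len(scores) if scores else 1.0), unsupported

# Example: a grounded answer passes; a fabricated claim is flagged.
source = ["The Eiffel Tower is in Paris and was completed in 1889."]
good_score, good_flags = faithfulness(
    "The Eiffel Tower is in Paris. It was completed in 1889.", source)
bad_score, bad_flags = faithfulness(
    "The monument glows purple every midnight.", source)
```

In a real deployment this lexical-overlap proxy would be replaced by an entailment or NLI-based faithfulness model, but the control flow is the same: score each claim against the retrieved evidence, and escalate low-scoring output to a human control point rather than publishing it automatically.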