3. Misinformation
2 - Post-deployment

Misinformation Harms

AI systems generating and facilitating the spread of inaccurate or misleading information that causes people to develop false beliefs

Source: MIT AI Risk Repository (mit262)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit262

Domain lineage

3. Misinformation

74 mapped risks

3.0 > Misinformation

Mitigation strategy

1. Implement mandatory content provenance and watermarking mechanisms (e.g., cryptographic or statistical) within generative AI models to securely embed and maintain metadata that verifies the origin and edit history of all synthesized outputs (text, image, audio, video). This makes content traceable to the source model and improves platform-level detection.

2. Develop and deploy evidence-based pre-bunking and media-literacy interventions, such as inoculation games and public awareness campaigns, to equip individuals with the cognitive skills needed to critically evaluate content, recognize common deception techniques, and build resilience against believing and sharing misinformation.

3. Enforce rigorous platform moderation policies that combine automated detection tools (e.g., natural language processing and multimodal analysis) with human fact-checkers to apply clear contextualization labels (e.g., disputed-claim warnings), algorithmically reduce the visibility and monetization of flagged content, and systematically issue corrections for confirmed false narratives.

ADDITIONAL EVIDENCE

Example: An AI-generated image that was widely circulated on Twitter led several news outlets to falsely report that an explosion had taken place at the US Pentagon, causing a brief drop in the US stock market (Alba, 2023).