4. Malicious Actors & Misuse

Misinformation and disinformation

Ill-intentioned individuals or entities may deliberately use generative AI models to produce and spread disinformation—false or misleading information knowingly presented as if true—on a massive scale. Beyond increasing the scale and reach of disinformation, generative AI can also make it more convincing and more precisely targeted.

Source: MIT AI Risk Repository (mit735)

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit735

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Advanced content provenance and detection systems: deploy robust computational methods, such as digital watermarking, cryptographic signing, and multi-modal forensic analysis (e.g., linguistic and deepfake detection), to verify the authenticity and source of AI-generated content, enabling early, scaled identification of malicious synthetic media.

2. Rigorous model governance and security protocols: establish strict access controls, version tracking, and encryption for generative AI models to prevent unauthorized internal or external use (e.g., shadow AI), intentional poisoning, and algorithmic manipulation by ill-intentioned actors seeking to amplify disinformation.

3. Cognitive and systemic resilience: proactively implement organizational crisis-communication frameworks for managing large-scale misinformation attacks, and invest in scaled digital and media literacy programs that equip users to discern synthetic content and resist the manipulative impact of disinformation campaigns.
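To make the cryptographic-signing idea in the first mitigation concrete, here is a minimal, hypothetical sketch in Python: a generator attaches a signed provenance manifest (generator ID plus content hash) to a piece of output, and a verifier checks both the signature and the hash. Real provenance standards such as C2PA use public-key signatures and far richer manifests; this sketch uses stdlib HMAC-SHA256 with an assumed shared secret purely for illustration, and all names (`sign_content`, `verify_content`, `SECRET_KEY`) are invented here.

```python
import hashlib
import hmac
import json

# Assumed shared signing secret -- illustration only; a real system
# would use asymmetric keys so verifiers cannot forge manifests.
SECRET_KEY = b"demo-signing-key"

def sign_content(content: bytes, generator_id: str) -> dict:
    """Build a signed provenance manifest for generated content."""
    manifest = {
        "generator": generator_id,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    # Sign a canonical (sorted-key) JSON encoding of the claims.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(content: bytes, manifest: dict) -> bool:
    """Check the manifest signature and that the content hash matches."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest.get("signature", ""))
        and hashlib.sha256(content).hexdigest() == claims.get("sha256")
    )

text = b"This caption was generated by model X."
m = sign_content(text, "model-x-v1")
print(verify_content(text, m))         # True: intact and authentic
print(verify_content(b"tampered", m))  # False: content was altered
```

The key design point this illustrates is that the manifest binds the content hash and the generator identity together under one signature, so tampering with either the content or the attribution invalidates verification.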