Automatically generating disinformation at scale
Disinformation (in various modalities: text, audio, images, video, etc.) can be generated at scale with minimal human oversight or effort. The underlying tools are relatively cheap and the technology is widely available, so such deployments can become particularly widespread in sensitive political contexts.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit1175
Domain lineage
4. Malicious Actors & Misuse
4.1 > Disinformation, surveillance, and influence at scale
Mitigation strategy
1. **Mandate Digital Provenance and Systemic Safeguards:** Require all developers of generative artificial intelligence (AI) systems to embed digital provenance, such as cryptographic watermarking and signed metadata, into all synthetic content at the point of creation to enable forensic traceability and content authentication. Concurrently, integrate compulsory red-teaming and threat modeling into the pre-release product lifecycle to preemptively identify and mitigate potential malicious-use vectors.
2. **Establish Coordinated Regulatory and Transparency Standards:** Institute a global, multi-stakeholder governance framework that mandates cross-platform information sharing on disinformation campaigns and requires online platforms to be fully transparent about their content moderation policies and algorithmic decision-making (explainable AI, XAI). This includes enforcing bias audits and diverse-dataset requirements for all high-risk AI models to ensure fairness and prevent unintentional suppression of legitimate content.
3. **Cultivate Cognitive Resilience through Comprehensive Education:** Implement sustained, evidence-based digital literacy and cognitive resilience programs, integrating them into both academic curricula and targeted public awareness campaigns. These initiatives must teach the public, especially frequently targeted communities, the psychological tactics of influence operations and the technical characteristics of AI-generated content (e.g., deepfakes), thereby improving their ability to discern and resist disinformation.
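To make the provenance requirement in item 1 concrete, the sketch below attaches a signed metadata record to a piece of synthetic content and verifies it later. This is a minimal illustration, not a production scheme: the field names and the shared `SIGNING_KEY` are hypothetical, and a real deployment would follow an open standard such as C2PA with asymmetric signatures and managed key infrastructure.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key held by the content generator; a real system
# would use an asymmetric key pair so third parties can verify without
# being able to forge records.
SIGNING_KEY = b"generator-secret-key"


def attach_provenance(content: bytes, generator_id: str) -> dict:
    """Build a signed provenance record for a blob of synthetic content."""
    record = {
        "generator": generator_id,
        "created_at": int(time.time()),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Sign a canonical serialization of the record so field order
    # cannot change the signature.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the signature is intact and the content is unmodified."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )
```

A verifier can then reject any content whose record is missing, altered, or attached to different bytes than it was signed over, which is the forensic traceability the mitigation calls for.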