4. Malicious Actors & Misuse

Spreading disinformation

Generative AI models might be used to intentionally create misleading or false information to deceive or influence a targeted audience.

Source: MIT AI Risk Repository (mit1303)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1303

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Implement Retrieval-Augmented Generation (RAG) and curate training data. Integrate generative AI models with verified, domain-specific external data sources, and train on clean, curated datasets to reduce the propensity for hallucination and improve the factual accuracy of outputs at the source.

2. Mandate rigorous human oversight and output-validation workflows. Establish human-in-the-loop control points and governance frameworks that require review and fact-checking of all AI-generated content before public dissemination or use in critical organizational decision-making.

3. Deploy technical provenance and detection methods. Use computational countermeasures such as digital watermarking of AI outputs and provenance tracking, alongside multilingual AI models that automatically detect fabricated or manipulated content, to improve transparency and traceability across the information sphere.
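The first mitigation, grounding generation in verified sources, can be sketched in a few lines. This is a minimal illustration, not part of the repository entry: the VERIFIED_SOURCES corpus and the prompt format are hypothetical stand-ins for a curated, domain-specific knowledge base and a real LLM call.

```python
# Minimal RAG grounding sketch (illustrative only).
# VERIFIED_SOURCES is a hypothetical curated corpus; in practice this
# would be a vetted document store queried via embeddings or search.
VERIFIED_SOURCES = {
    "election dates": "The election is held on the first Tuesday of November.",
    "vaccine safety": "Approved vaccines undergo multi-phase clinical trials.",
}

def retrieve(query: str) -> list[str]:
    """Return curated passages whose topic keywords appear in the query."""
    q = query.lower()
    return [text for topic, text in VERIFIED_SOURCES.items()
            if any(word in q for word in topic.split())]

def build_prompt(query: str) -> str:
    """Prepend verified context so the model answers from vetted sources."""
    context = "\n".join(retrieve(query)) or "No verified source found."
    return (f"Answer using ONLY these verified sources:\n{context}\n\n"
            f"Question: {query}")
```

The key design point is that the generator never answers from parametric memory alone: when retrieval finds nothing, the prompt says so explicitly, which gives a downstream human-review step (mitigation 2) a clear signal to escalate rather than publish.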