4. Malicious Actors & Misuse

Disinformation and Influence Operations

In addition to unintentionally degrading the information environment (discussed in the Societal Harms section above), frontier AI can be deliberately misused to spread false information in order to create disruption, sway opinion on political issues, or cause other forms of harm or damage.

Source: MIT AI Risk Repository, risk ID mit1382

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1382

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Mandate algorithmic transparency and accountability mechanisms, including rigorous bias audits and requirements for diverse training data, for AI systems used in content moderation and information dissemination, in order to mitigate systemic amplification of false narratives.

2. Accelerate the development and deployment of technical countermeasures, such as deepfake detection, content provenance standards, and early warning systems, to counter the rapid creation and mass dissemination of synthetic disinformation.

3. Implement and sustain large-scale digital literacy and cognitive resilience campaigns, integrated into civic education, that equip end users with the critical skills needed to recognize and resist AI-driven disinformation and manipulation.