4. Malicious Actors & Misuse (Post-deployment)

Manipulation of public opinion

Malicious actors can use general-purpose AI to generate fake content, such as text, images, or videos, in attempts to manipulate public opinion. Researchers believe that successful attempts could have several harmful consequences.

Source: MIT AI Risk Repository (mit1021)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1021

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Mandate the implementation of a comprehensive AI Safety and Security Framework for general-purpose AI models, particularly those posing systemic risk, which includes continuous assessment and mitigation of misuse capabilities, rigorous adversarial testing, and the technical requirement to embed traceability or watermarks into all machine-generated outputs.

2. Systematically integrate media and AI literacy education, coupled with psychological inoculation (pre-bunking) strategies, across public-facing channels to build user-level resilience by equipping individuals with the critical thinking skills and cognitive tools necessary to recognize and resist manipulative, synthetic, or artificially amplified content.

3. Deploy and continually update advanced AI-powered content detection and network analysis systems, utilizing natural language processing and machine learning, to rapidly identify, track, and attribute the sources and coordinated dissemination patterns of deepfakes and disinformation across platforms, thereby enabling timely flagging and counter-narrative dissemination.
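The traceability requirement in strategy 1 can be illustrated with a minimal sketch. The example below tags generated text with an HMAC signature that a provider could later verify; the key name, `tag_output`, and `verify_output` are hypothetical illustrations, not part of any standard. Production watermarking schemes (e.g., statistical watermarks embedded in a model's token distribution) are considerably more sophisticated and survive paraphrasing, which this metadata-based sketch does not.

```python
import hmac
import hashlib

# Hypothetical signing key held privately by the model provider.
SECRET_KEY = b"provider-held-signing-key"

def tag_output(text: str) -> dict:
    """Attach a provenance tag: an HMAC-SHA256 over the generated text."""
    sig = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"text": text, "provenance": sig}

def verify_output(record: dict) -> bool:
    """Recompute the HMAC and check it against the stored provenance tag."""
    expected = hmac.new(
        SECRET_KEY, record["text"].encode("utf-8"), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["provenance"])

record = tag_output("Example machine-generated sentence.")
print(verify_output(record))   # True: untampered record verifies
record["text"] = "Tampered sentence."
print(verify_output(record))   # False: any edit breaks the tag
```

The design choice here, verification keyed to the provider's secret, mirrors the policy goal: only the party that generated the content can attest to its origin, enabling downstream platforms to flag unverifiable synthetic media.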