
Personalized disinformation

Automatically generated disinformation can be personalized to target specific groups or individuals. Such attacks are more effective at achieving their goals, and their cost falls significantly when general-purpose AI (GPAI) models are used.

Source: MIT AI Risk Repository (mit1185)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1185

Domain lineage

4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Mandate robust content authentication and provenance mechanisms, such as embedding durable digital watermarks and traceable metadata in all AI-generated content, so that its synthetic origin is signaled clearly and immediately to end users and downstream platforms (a minimal sketch of the metadata half follows this list).
2. Develop and deploy sustained media literacy and cognitive resilience programs across the public and private sectors, equipping individuals with the critical-thinking skills to recognize and resist the psychological and social engineering tactics of hyper-targeted, personalized disinformation.
3. Establish and enforce stringent governance frameworks for high-capability general-purpose AI (GPAI) models, including strict access controls, version tracking, and adversarial testing, to prevent their unauthorized or malicious deployment in mass-scale, personalized influence operations.
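To make the provenance mechanism in item 1 concrete, here is a minimal, hypothetical sketch that is not part of the repository entry. It shows one way a generator service might bind traceable metadata to AI-generated text using only the Python standard library; the key, field names, and the `sign_provenance`/`verify_provenance` helpers are illustrative assumptions, not a real standard such as C2PA (which uses asymmetric signatures rather than a shared HMAC secret).

```python
import hashlib
import hmac
import json
import time

# Hypothetical secret held by the generator service. In practice this would
# be an asymmetric signing key managed under a provenance standard such as
# C2PA, not a shared HMAC secret.
SIGNING_KEY = b"example-provenance-key"


def sign_provenance(content: str, model_id: str) -> dict:
    """Attach traceable provenance metadata to AI-generated content.

    Returns a record binding the content hash, generator identity, and
    timestamp under an HMAC tag, so downstream platforms can detect
    tampering or a stripped synthetic-origin label.
    """
    record = {
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "model_id": model_id,
        "generated_at": int(time.time()),
        "synthetic": True,  # explicit signal of AI origin
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: str, record: dict) -> bool:
    """Check that content matches its provenance record and that the record is intact."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content.encode()).hexdigest()
    )


if __name__ == "__main__":
    text = "Example AI-generated paragraph."
    record = sign_provenance(text, model_id="example-gpai-v1")
    assert verify_provenance(text, record)          # intact content passes
    assert not verify_provenance(text + " (edited)", record)  # edits are detected
    print(json.dumps(record, indent=2))
```

Note that detached metadata of this kind is easy to strip from the content it describes, which is why the mitigation pairs it with watermarks embedded in the content itself; the sketch covers only the traceable-metadata half of item 1.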