4. Malicious Actors & Misuse

Disinformation

Bad actors can also use generative AI tools to produce adaptable content designed to support a campaign, political agenda, or hateful position and spread that information quickly and inexpensively across many platforms.

Source: MIT AI Risk Repository (mit512)

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit512

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

- Implement advanced technical countermeasures, such as detection tools for synthetic media (e.g., deepfakes, manipulated text/audio) and digital provenance mechanisms, to ensure the timely identification and flagging of AI-generated malicious content at scale.
- Establish a comprehensive regulatory framework mandating algorithmic transparency and platform accountability, specifically requiring digital services to proactively identify and mitigate defined tactics, techniques, and procedures (TTPs) associated with AI-driven disinformation campaigns.
- Invest in and scale public media literacy and educational initiatives to enhance citizens' critical capacity, enabling them to verify online information from credible, diverse sources and strengthen overall societal resilience against sophisticated content manipulation.