4. Malicious Actors & Misuse

Making disinformation cheaper and more effective

LMs can be used to create synthetic media and ‘fake news’, and may reduce the cost of producing disinformation at scale (Buchanan et al., 2021). While some predict that hiring humans to generate disinformation will remain cheaper (Tamkin et al., 2021), LM-assisted content generation may nonetheless offer a cheaper route to diffuse disinformation at scale.

Source: MIT AI Risk Repository (mit245)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit245

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Implement and rigorously enforce platform policies for content moderation, focusing on both proactive measures (e.g., algorithmic downranking, demonetization) and reactive interventions (e.g., contextualizing content with fact-check labels, debunking, and counterspeech) to limit the visibility and propagation of LLM-enabled disinformation.

2. Advance and deploy sophisticated technical detection methodologies, such as forensic analysis of digital artifacts and pixel anomalies, alongside content provenance mechanisms (e.g., digital watermarking and metadata tracking) to certify the authenticity and origin of media; a toy sketch of the statistical idea behind watermark detection follows this list.

3. Systematically integrate and expand media literacy education and "inoculation" strategies (prebunking) to enhance users' cognitive defenses, thereby reducing individual susceptibility to believing and sharing synthetic or misleading content generated at scale.
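The entry names watermarking without spelling out how detection works, so the following Python is a minimal sketch of the statistical idea behind many proposed text-watermark detectors, assuming a hypothetical "green-list" scheme: the generator biases output toward a pseudorandomly chosen subset of the vocabulary, and the detector runs a one-proportion z-test for over-representation of that subset. GAMMA, is_green, and watermark_z_score are names invented for this illustration, not any deployed API.

import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary placed on the "green list"

def is_green(prev_token: str, token: str) -> bool:
    """Toy green-list membership test: hash the (previous token, token)
    pair and keep the fraction GAMMA of hash values. A real scheme would
    seed a PRNG with the previous token and partition the model's
    actual vocabulary."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GAMMA

def watermark_z_score(tokens: list[str]) -> float:
    """One-proportion z-test over consecutive token pairs: how far does
    the observed green-token count deviate from the GAMMA hit rate
    expected in unwatermarked text? Assumes at least two tokens."""
    trials = len(tokens) - 1
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    expected = GAMMA * trials
    return (hits - expected) / math.sqrt(GAMMA * (1 - GAMMA) * trials)

tokens = "the quick brown fox jumps over the lazy dog".split()
z = watermark_z_score(tokens)
print(f"z = {z:.2f}; flag as watermarked if z exceeds a threshold such as 4")

In practice the detection threshold trades false positives against misses, and paraphrasing degrades the signal, which is one reason the mitigation pairs watermarking with provenance metadata rather than relying on either mechanism alone.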

ADDITIONAL EVIDENCE

Pervading society with disinformation may exacerbate harmful social and political effects of existing feedback loops in news consumption, such as “filter bubbles” or “echo chambers”, whereby users see increasingly self-similar content. This can lead to a loss of shared knowledge and increased polarisation (Colleoni et al., 2014; Dutton and Robertson, 2021)...
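The "increasingly self-similar content" dynamic is a feedback loop, and a toy simulation makes it concrete. The Python below is a sketch under invented assumptions (a one-dimensional "stance" per item, a nearest-match recommender, and a user taste that drifts toward whatever is shown; catalogue, taste, and LEARNING_RATE are names made up for this example), not a model of any real platform.

import random

random.seed(0)

# Toy feedback loop: the platform recommends the catalogue item closest
# to the user's current taste, and the user's taste drifts toward what
# they are shown, so the diversity of consumed content collapses.
catalogue = [random.uniform(-1, 1) for _ in range(50)]  # 1-D "stance" of each item
taste = 0.1                                             # user's initial stance
LEARNING_RATE = 0.3                                     # how strongly exposure shifts taste

for step in range(10):
    shown = min(catalogue, key=lambda item: abs(item - taste))  # nearest-match ranking
    taste += LEARNING_RATE * (shown - taste)                    # taste drifts toward shown item
    print(f"step {step}: shown={shown:+.2f}, taste={taste:+.2f}")

Within a few iterations the user is shown essentially the same item repeatedly, which is the echo-chamber pattern the paragraph describes; LM-generated disinformation lowers the cost of supplying that loop with tailored content.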