4. Malicious Actors & Misuse

Information Integrity

Lowered barrier to entry for generating, exchanging, and consuming content that may not distinguish fact from opinion or fiction, may fail to acknowledge uncertainty, or could be leveraged for large-scale dis- and mis-information campaigns.

Source: MIT AI Risk Repository (mit763)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit763

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Implement and refine platform-level algorithmic and technical controls (e.g., automated downranking, demonetisation, and removal) to limit the visibility and financial viability of content identified as non-fact-based or manipulative, thereby increasing the effective barrier to entry for large-scale campaigns.

2. Deploy assertive, context-rich veracity labels and provenance cues on potentially deceptive content to facilitate critical evaluation by consumers, coupled with post-hoc debunking mechanisms that correct inaccurate claims with verifiable evidence.

3. Mandate and support scalable media literacy, digital verification training, and psychological inoculation (prebunking) programs to enhance individual cognitive resilience and foster critical thinking skills against common deceptive techniques.
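The tiered platform-level controls in the first mitigation step (removal, demonetisation, downranking) can be sketched as a simple policy function. This is purely illustrative: the thresholds, field names, and the upstream veracity score are all hypothetical assumptions, not part of the repository entry.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only -- real platforms would
# calibrate these against classifier precision and appeal processes.
REMOVE_BELOW = 0.2
DEMONETISE_BELOW = 0.5

@dataclass
class ContentItem:
    item_id: str
    veracity_score: float  # assumed upstream classifier output: 0.0 (manipulative) .. 1.0 (fact-based)
    visibility: float = 1.0
    monetised: bool = True
    removed: bool = False

def apply_platform_controls(item: ContentItem) -> ContentItem:
    """Apply tiered controls in order of severity: removal, then demonetisation plus downranking."""
    if item.veracity_score < REMOVE_BELOW:
        item.removed = True
        item.visibility = 0.0
        item.monetised = False
    elif item.veracity_score < DEMONETISE_BELOW:
        item.monetised = False
        # Downrank proportionally to the shortfall from the demonetisation threshold.
        item.visibility = item.veracity_score / DEMONETISE_BELOW
    return item
```

For example, an item scoring 0.4 would keep reduced visibility (0.8) but lose monetisation, while an item scoring 0.1 would be removed outright; the point of the tiering is that the cost of running a large-scale campaign rises before content is ever removed.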