Political
In the UK, an initial form of computational propaganda already appeared during the Brexit referendum [1]. In the future, there are concerns that oppressive governments could use AI to shape citizens' opinions.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit618
Domain lineage
4. Malicious Actors & Misuse
4.1 > Disinformation, surveillance, and influence at scale
Mitigation strategy
1. Prioritize the development and implementation of comprehensive AI and media literacy curricula to build citizen resilience against manipulative content and tactics (Source 2, 6, 8). This strategy strengthens the cognitive and critical-thinking skills needed to distinguish authentic information from automated propaganda, an approach often referred to as "pre-bunking" (Source 8, 13).
2. Mandate robust regulatory and governance frameworks that enforce transparency and accountability for the deployment of AI in public communication (Source 2, 17, 20). This includes legislation that requires algorithmic transparency, imposes penalties for platform non-compliance in addressing coordinated inauthentic behavior, and establishes clear ethical guidelines for political communication so that human accountability is preserved (Source 1, 2, 17).
3. Invest in the development and deployment of advanced, real-time computational detection techniques to identify and inhibit AI-generated disinformation campaigns (Source 1, 8). This requires specialized machine learning models and algorithms capable of identifying increasingly believable automated text, deepfakes, and coordinated account activity at scale and speed (Source 1, 6, 8).
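As a rough illustration of the "coordinated account activity" signal mentioned in strategy 3, one simple heuristic is to flag groups of accounts that post near-identical text. The sketch below is a toy example with invented account names and an arbitrary similarity threshold; production systems combine many such signals with trained machine learning models rather than a single string-similarity check.

```python
# Toy sketch: flag pairs of accounts whose posts are near-duplicates,
# a crude proxy for coordinated inauthentic behavior. The accounts,
# posts, and the 0.9 threshold are illustrative assumptions only.
from difflib import SequenceMatcher
from itertools import combinations


def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two posts (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flag_coordinated(posts: dict, threshold: float = 0.9) -> set:
    """Return account pairs whose posts exceed the similarity threshold."""
    flagged = set()
    for (acct_a, text_a), (acct_b, text_b) in combinations(posts.items(), 2):
        if similarity(text_a, text_b) >= threshold:
            flagged.add(frozenset({acct_a, acct_b}))
    return flagged


posts = {
    "acct_1": "Vote now before it is too late! #referendum",
    "acct_2": "Vote now before it's too late! #referendum",
    "acct_3": "Lovely weather in Manchester today.",
}
print(flag_coordinated(posts))  # flags the acct_1 / acct_2 pair only
```

Pairwise comparison is O(n²) and too slow at platform scale; real deployments typically use locality-sensitive hashing or embedding-based clustering to find near-duplicate content, then feed those clusters into downstream classifiers.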