4. Malicious Actors & Misuse

Biased influence through citizen screening and tailored propaganda

AI-powered chatbots tailor their communication style to influence individual users' decisions. An early form of computational propaganda of this kind has already occurred in the UK during the Brexit referendum. Looking ahead, there are concerns that oppressive governments could use AI to shape citizens' opinions at scale.

Source: MIT AI Risk Repository (mit619)

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit619

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Mandate algorithmic accountability for high-risk AI systems used in public discourse and governance. This includes implementing rigorous, continuous bias audits and requiring the deployment of Explainable AI (XAI) to ensure transparency in how models arrive at influence-related decisions and content targeting.

2. Accelerate the development and deployment of advanced, multilingual AI-driven FIMI detection and attribution models that provide transparent explanations for content flagging. Concurrently, leverage generative AI to create automated, evidence-based counter-narratives to diminish the effectiveness of mass-produced computational propaganda.

3. Establish sustained, multi-stakeholder programs to foster cognitive resilience and digital literacy among the public. These campaigns must specifically address the persuasive techniques of AI-generated content and hyper-targeted disinformation to strengthen citizens' ability to critically evaluate and resist sophisticated influence operations.