4. Malicious Actors & Misuse

Driving opinion manipulation

AI assistants may facilitate large-scale disinformation campaigns by offering novel, covert ways for propagandists to manipulate public opinion. This could undermine the democratic process by distorting public opinion and, in the worst case, fueling skepticism and political violence.

Source: MIT AI Risk Repository (mit436)

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit436

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Prioritize technical safeguards and integrity: Implement and rigorously test robust safety mechanisms, such as prompt guarding and advanced content filtering, across the AI development lifecycle to prevent the mass generation of manipulative content (e.g., deepfakes or disinformation) and to ensure models adhere to ethical and safety parameters.

2. Establish a governance and accountability framework: Institute mandatory regulation, transparency, and routine auditing of AI developers to hold them accountable for potential misuse, and require public disclosure of each system's safety mechanisms and its policies for managing content manipulation and influence campaigns.

3. Foster societal and systemic resilience: Deploy "defensive AI" tools to actively monitor and counteract malicious agents and narratives, alongside substantial investment in broad-based media and AI literacy programs that equip the public with the critical-thinking skills needed to resist sophisticated political manipulation.