4. Malicious Actors & Misuse

Generative AI use in political influence campaigns

GPAI tools can be used to automate and scale influence campaigns [178]. Public opinion may be manipulated through targeted misleading or deceptive content, fueling political polarization and eroding trust in public institutions.

Source: MIT AI Risk Repository, mit1177

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1177

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Implement Enforceable Regulatory Frameworks and Corporate Accountability. Establish legislative measures that mandate transparency for AI-generated election content, including clear disclosure and watermarking requirements. Crucially, clarify legal liability for generative AI developers and deployers to ensure they exercise reasonable care to prevent foreseeable harms, thereby curtailing the initial creation of malicious content.

2. Mandate Comprehensive Digital Literacy and Cognitive Resilience Initiatives. Integrate robust AI and media literacy into educational curricula and launch targeted awareness campaigns. These initiatives must empower the public with critical thinking skills and the capability to verify information and resist manipulative narratives, thereby constricting user engagement with disinformation.

3. Foster International Cooperation and Harmonization of AI Governance. Develop and implement international cooperation mechanisms and common regulatory standards to address the cross-border and scalable nature of AI-enabled influence campaigns, ensuring a resilient global information ecosystem that constrains the dissemination of disinformation.