4. Malicious Actors & Misuse

Politically motivated misuse

General purpose AI models could exacerbate existing tactics for political destabilisation, such as disinformation campaigns and surveillance efforts, if misused for political ends. Technological advances in text and media generation by general purpose AI models could refine disinformation164 attempts to shape and polarise public opinion or to influence important political events.165 Improved automated processing of text, audio, image, and video could be used for surveillance and could exacerbate human rights violations and the repression of political opposition.

Source: MIT AI Risk Repository (mit843)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit843

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Establish comprehensive AI governance frameworks that mandate bias audits, diverse dataset requirements, and algorithmic impact assessments for politically relevant, high-risk systems, to ensure fairness and prevent misuse for surveillance or repression.

2. Develop and enforce interoperable technical standards for content provenance and mandatory labeling of all AI-generated text, audio, and video, to enhance transparency and support the detection of synthetic-media disinformation.

3. Implement sustained, multi-stakeholder public education campaigns focused on digital literacy and cognitive resilience, to inoculate populations against AI-driven political misinformation and manipulation.