4. Malicious Actors & Misuse

Developments in AI enable actors to undermine democratic processes

Developments in AI are giving companies and governments more control over individuals’ lives than ever before and could be used to undermine democratic processes. We are already seeing how the collection of large amounts of personal data can be used to surveil and influence populations; for example, facial recognition technology is used to surveil Uighur and other minority populations in China [66]. Further advances in language modelling could also be used to develop tools that effectively persuade people of particular claims [42].

Source: MIT AI Risk Repository

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit898

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Establish a mandatory, harmonized framework for AI system accountability and content provenance, requiring developers and platforms to implement watermarking and lineage disclosure for all AI-generated or curated election-related content, and to subject high-risk influence systems to independent, third-party algorithmic impact assessments and bias audits.

2. Implement strict, legally enforceable governance over high-risk AI applications such as facial recognition technology (FRT) in public spaces, including mandates limiting the retention and sharing of collected biometric data, requiring clear public notification of its use, and enforcing stringent accuracy standards across all demographic cohorts.

3. Fund and integrate comprehensive civic and digital literacy programs into public education, and deploy widespread awareness campaigns to foster cognitive resilience, enabling citizens to critically evaluate and identify AI-driven disinformation and the mechanisms of algorithmic influence that erode social and political trust.