4. Malicious Actors & Misuse

Manipulation

The 2016 Cambridge Analytica scandal is the most infamous example: people's data was harvested from Facebook, and analytics derived from it were used to target those same people with manipulative content for political purposes. While the operation may not have involved AI per se, it relied on the same kind of data, and it is easy to see how AI would make such targeting more effective.

Source: MIT AI Risk Repository (mit93)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit93

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Implement a robust privacy-by-design architecture that enforces the principles of data minimization and purpose limitation. This must include instituting requirements for meaningful, informed, and affirmative consent for all data processing activities, particularly those involving psychographic profiling or political targeting, and establishing continuous audit rights over third-party data access and usage.

2. Establish rigorous AI governance frameworks, including the deployment of continuous adversarial testing and red-teaming methodologies, to proactively assess the model's susceptibility to being used for generating or amplifying disinformation, deepfakes, and other forms of manipulative content at scale.

3. Mandate comprehensive transparency and accountability for all high-risk AI systems. This includes maintaining immutable, accessible audit trails and logs of system behaviors, model inputs, and decisions, and investing in societal resilience through public media literacy and digital education programs to counteract the effects of information manipulation.
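To make the third mitigation concrete, one common way to implement a tamper-evident audit trail is a hash-chained append-only log, where each entry embeds the hash of the previous entry so that any in-place modification breaks the chain. The sketch below is illustrative only (the `AuditLog` class and its field names are hypothetical, not part of any specific governance framework), but it shows the basic technique:

```python
import hashlib
import json

class AuditLog:
    """Append-only audit log: each entry stores the hash of the
    previous entry, so editing or deleting a past record breaks
    the chain and is detectable on verification."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        record = {"event": event, "prev_hash": prev_hash}
        # Canonical JSON (sorted keys) so the digest is deterministic.
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**record, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash and check chain linkage."""
        prev_hash = self.GENESIS
        for entry in self.entries:
            record = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
            digest = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
                return False
            prev_hash = entry["hash"]
        return True

log = AuditLog()
log.append({"actor": "model-A", "input_id": "req-1", "decision": "allow"})
log.append({"actor": "model-A", "input_id": "req-2", "decision": "deny"})
print(log.verify())  # True: chain intact
log.entries[0]["event"]["decision"] = "deny"  # simulate tampering
print(log.verify())  # False: tampering detected
```

In production such logs are typically written to write-once storage or anchored in an external transparency log, so that an attacker who can rewrite the whole chain still cannot escape detection.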