Authoritarian Surveillance, Censorship, and Use: Authoritarian Surveillance and Targeting of Citizens
Authoritarian governments could misuse AI to increase the efficacy of repressive domestic surveillance campaigns, and malicious actors will recognize the power of AI targeting tools. AI-powered analytics have transformed the relationship between companies and consumers, and they are now doing the same for governments and individuals. The broad circulation of personal data cuts in both directions: it drives commercial innovation and can make our lives more convenient, but it also creates vulnerabilities and the risk of misuse. AI assistants can be used to identify and target individuals for surveillance or harassment, or to manipulate people's behavior, such as by microtargeting them with political ads or fake news. In the wrong hands, advanced AI assistants with multimodal and external tool-use capabilities become powerful targeting tools for oppression and control. Without proper policies and technical security and privacy mechanisms in place, malicious actors can exploit advanced AI assistants to harvest data on companies, individuals, and governments. There have already been reported incidents of nation-states combining widely available commercial data with illicitly acquired data to track, manipulate, and coerce individuals. Advanced AI assistants can exacerbate these misuse risks by allowing malicious actors to more easily link disparate multimodal data sources at scale and exploit the 'digital exhaust' of personally identifiable information (PII) produced as a byproduct of modern life.
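To make the linkage risk concrete, the sketch below shows how two nominally separate datasets can be joined on shared quasi-identifiers (here, ZIP code and birth year) to tie a pseudonymous record to a named individual. All records, field names, and values are fabricated for illustration; real linkage attacks operate at far larger scale and across many more attributes.

```python
# Hypothetical illustration of record linkage on quasi-identifiers.
# All data below is fabricated for demonstration purposes.
from collections import defaultdict

# Dataset A: commercial ad-tech records (no names, only quasi-identifiers).
ad_records = [
    {"device_id": "d-117", "zip": "30309", "birth_year": 1984},
    {"device_id": "d-204", "zip": "94110", "birth_year": 1991},
]

# Dataset B: a separately obtained roster that includes names.
roster = [
    {"name": "A. Smith", "zip": "30309", "birth_year": 1984},
    {"name": "B. Jones", "zip": "60614", "birth_year": 1975},
]

def link_records(left, right, keys):
    """Join two record lists on shared quasi-identifier fields."""
    index = defaultdict(list)
    for row in right:
        index[tuple(row[k] for k in keys)].append(row)
    matches = []
    for row in left:
        for hit in index[tuple(row[k] for k in keys)]:
            matches.append({**row, **hit})
    return matches

linked = link_records(ad_records, roster, keys=("zip", "birth_year"))
# One pseudonymous device record is now tied to a named individual.
print(linked)
```

The point is not the trivial join itself but that advanced AI assistants automate exactly this kind of cross-source matching, including over messy multimodal data where exact-key joins would previously have failed.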
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit389
Domain lineage
4. Malicious Actors & Misuse
4.1 > Disinformation, surveillance, and influence at scale
Mitigation strategy
1. Implement comprehensive federal-level data protection and privacy laws, including data-minimization requirements for corporate entities and mandating warrants (demonstrating probable cause) for government access to personal data, thereby updating outdated legal doctrines such as the third-party doctrine to reflect the modern information age.
2. Establish strict procurement and export controls requiring all AI systems, particularly those used by government or sold globally, to enshrine human rights and civil liberties as core design principles, specifically banning undisclosed political censorship and restricting the export of surveillance and opinion-shaping technologies.
3. Mandate and enforce continuous AI security compliance programs, aligned with frameworks such as the NIST AI Risk Management Framework, for all governmental and high-risk AI deployments, emphasizing stringent access controls (least-privilege/zero-trust) and continuous monitoring of model behavior and data integrity to prevent unauthorized data exploitation.
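The least-privilege/zero-trust control in item 3 can be sketched as a deny-by-default permission check on an AI system's data accesses: nothing is reachable unless a grant explicitly exists. The role names and permission strings below are hypothetical, not drawn from any specific framework.

```python
# Minimal deny-by-default (least-privilege) access check, as a sketch of
# the zero-trust posture described in the mitigation strategy.
# Roles and permission strings are illustrative assumptions.

ROLE_GRANTS = {
    "support_agent": {"read:tickets"},
    "analyst": {"read:tickets", "read:aggregate_stats"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: only explicitly granted permissions pass."""
    return permission in ROLE_GRANTS.get(role, set())

# Explicit grants succeed; anything ungranted or unknown is refused.
assert is_allowed("analyst", "read:aggregate_stats")
assert not is_allowed("support_agent", "read:pii")     # never granted
assert not is_allowed("unknown_role", "read:tickets")  # unknown role denied
```

In a real deployment this check would sit in front of every tool call or data query the AI system makes, paired with audit logging so that continuous monitoring (the other half of item 3) has a complete access trail to inspect.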