Illegitimate surveillance and censorship
Anticipated risk: Mass surveillance previously required millions of human analysts [83] but is increasingly being automated with machine learning tools [7, 168]. Collecting and analyzing large amounts of information about people raises concerns about privacy rights and democratic values [41, 173, 187]. Conceivably, LMs could be applied to reduce the cost and increase the efficacy of mass surveillance, amplifying the capabilities of actors who conduct it, including for illegitimate censorship or other harms.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit220
Domain lineage
4. Malicious Actors & Misuse
4.1 > Disinformation, surveillance, and influence at scale
Mitigation strategy
1. Implement stringent, rights-based governance frameworks (lawfulness, necessity, proportionality) for all LLM applications deployed in surveillance or censorship contexts, ensuring that judicial or independent oversight is mandatory before any mass, indiscriminate data collection or analysis is authorized.
2. Mandate the integration of privacy-enhancing technologies (PETs), such as cryptographic techniques (e.g., Zero-Knowledge Proofs, Private Set Intersection), into LLM architectures to enable detection of illegal content or threats while minimizing exposure and retention of protected personal or non-pertinent data.
3. Establish requirements for mandatory, public, and routine human rights and algorithmic impact assessments (HRIAs/AIAs) for all LLM-powered surveillance systems to ensure transparency, identify discriminatory effects, and provide accessible, effective redress mechanisms for individuals harmed by opaque automated decisions or illegitimate censorship.
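To illustrate the data-minimization idea behind Private Set Intersection mentioned in point 2, the toy sketch below shows the intended information flow: two parties learn only which identifiers they hold in common, never each other's full lists. All names and values here are hypothetical. Note this salted-hash version is only a pedagogical stand-in: real PSI deployments use cryptographic protocols (e.g., Diffie-Hellman-based or OPRF-based PSI), because plain hashing of low-entropy identifiers such as email addresses is brute-forceable.

```python
import hashlib

def blind(items: set[str], salt: bytes) -> dict[str, str]:
    # Each party hashes its items with a shared salt, so raw values are
    # never exchanged directly (toy stand-in for real cryptographic blinding).
    return {hashlib.sha256(salt + item.encode()).hexdigest(): item
            for item in items}

def private_set_intersection(set_a: set[str], set_b: set[str],
                             salt: bytes = b"shared-salt") -> list[str]:
    blinded_a = blind(set_a, salt)
    blinded_b = blind(set_b, salt)
    # Only matching digests are revealed; non-matching items stay hidden
    # from the other party.
    return sorted(blinded_a[h] for h in blinded_a.keys() & blinded_b.keys())

# Hypothetical example: a narrowly scoped, court-authorized watchlist check
# that avoids bulk disclosure of the platform's full user list.
watchlist = {"alice@example.com", "mallory@example.com"}
platform_users = {"bob@example.com", "mallory@example.com"}
print(private_set_intersection(watchlist, platform_users))
```

The design point is that the matching step operates on blinded values, so non-matching personal data is neither exposed nor retained, which is the minimization property point 2 asks PETs to provide.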