Information
Large-scale influence on communication and information systems, and epistemic processes more generally.
ENTITY
3 - Other
INTENT
3 - Other
TIMING
3 - Other
Risk ID
mit1040
Domain lineage
4. Malicious Actors & Misuse
4.1 > Disinformation, surveillance, and influence at scale
Mitigation strategy
1. Mandate Continuous Independent Oversight and Systemic Risk Assessment
Require providers of general-purpose AI (GPAI) models to perform and publish continuous systemic risk assessments and mitigations, including rigorous third-party pre-deployment audits and red-teaming. These evaluations must specifically analyze a model's capacity to generate large-scale disinformation, enable mass influence operations, and erode epistemic practices across diverse social and linguistic contexts, an area where effective measures remain largely underexplored.
2. Enforce Algorithmic Transparency and Output Provenance
Establish mandatory, legally enforceable standards for algorithmic transparency, including disclosure of model logic, and for the technical provenance of AI outputs, such as robust watermarking or cryptographic metadata. This counters the blurring of truth and fabrication by allowing downstream providers and end users to reliably assess the origin, potential biases, and integrity of AI-generated information.
3. Invest in Cognitive Resilience and Digital Literacy at Scale
Prioritize substantial investment in public education and civic campaigns that foster cognitive resilience and advanced digital literacy. These programs should be integrated into curricula and platform guidelines to equip citizens with the critical skills needed to identify and resist sophisticated, hyper-personalized, and algorithmically amplified manipulative narratives, thereby mitigating the systemic risk of large-scale influence on democratic and communication processes.
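The output-provenance idea in the second strategy can be illustrated with a minimal sketch. The example below is a hypothetical construction, not a specification from this record: it signs generated text together with its metadata using an HMAC, so any downstream tampering invalidates the signature. Real provenance schemes (e.g. C2PA-style manifests) use asymmetric signatures so that verifiers need no shared secret; the key name, model identifier, and helper functions here are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; production systems would use asymmetric
# signatures so verifiers do not need access to the signing key.
SIGNING_KEY = b"provider-signing-key"


def attach_provenance(text: str, model_id: str) -> dict:
    """Bundle generated text with signed provenance metadata."""
    metadata = {"model_id": model_id, "generator": "ai"}
    # Canonical serialization so signer and verifier hash identical bytes.
    payload = json.dumps({"text": text, "metadata": metadata},
                         sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "metadata": metadata, "signature": signature}


def verify_provenance(record: dict) -> bool:
    """Recompute the signature; any edit to text or metadata fails."""
    payload = json.dumps({"text": record["text"],
                          "metadata": record["metadata"]},
                         sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


record = attach_provenance("Model-generated summary...", "gpai-model-x")
print(verify_provenance(record))   # intact record verifies: True
record["text"] = "Edited without re-signing"
print(verify_provenance(record))   # tampering detected: False
```

The design choice here is that provenance travels with the content as verifiable metadata rather than relying on the distribution channel, which is what lets downstream providers and end users check origin and integrity independently.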