Diminishing societal trust due to disinformation or manipulation
The use of GPAIs may contribute to the proliferation of deliberate disinformation or unintended misinformation, which can severely erode trust in public figures and democratic institutions. This diminished trust can extend to other forms of media, leaving the public less informed.
ENTITY
1 - Human
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit1184
Domain lineage
4. Malicious Actors & Misuse
4.1 > Disinformation, surveillance, and influence at scale
Mitigation strategy
1. Mandate Rigorous and Independent Pre-deployment Governance and Audits
Implement a formal Governance and Oversight Controls framework (Source 13, 17) requiring mandatory third-party pre-deployment model audits and comprehensive risk assessments (Source 18) to systematically identify, evaluate, and mitigate potential proliferation pathways for misinformation and disinformation before General Purpose AI Systems (GPAIS) are deployed.
2. Establish Comprehensive Transparency and Provenance Mechanisms
Require the implementation of Explainable AI (XAI) systems to ensure the technical processes and reasoning behind AI outputs are accessible to stakeholders (Source 8). Additionally, mandate the consistent application of provenance cues (metadata) for all AI-generated content (Source 9) to allow for source traceability and enable users to assess content authenticity and credibility, thereby rebuilding public trust (Source 7).
3. Invest in Scalable Digital Literacy and Cognitive Resilience Campaigns
Fund and deploy sustained, multi-stakeholder initiatives, including media literacy courses, inoculation games, and public awareness campaigns (Source 2, 10). These interventions, framed within the 'Prepare' stage of misinformation response (Source 2), are designed to foster cognitive resilience and equip users with the skills to critically evaluate content, thereby reducing societal susceptibility to manipulation and the subsequent erosion of trust.
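The provenance-cue mechanism in item 2 can be illustrated with a minimal sketch. The field names and model identifier below are illustrative assumptions, not a prescribed schema; a production system would more likely rely on an established standard such as C2PA content credentials with cryptographic signatures, rather than the bare content hash used here.

```python
import hashlib
import json
from datetime import datetime, timezone

def attach_provenance(content: str, generator: str) -> dict:
    """Wrap AI-generated text in a record carrying minimal provenance metadata."""
    return {
        "content": content,
        "provenance": {
            "generator": generator,  # illustrative model identifier
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        },
    }

def verify_provenance(record: dict) -> bool:
    """Check that the content still matches its recorded hash (tamper detection)."""
    actual = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
    return actual == record["provenance"]["sha256"]

record = attach_provenance("Example AI-generated paragraph.", "example-gpai-model")
print(json.dumps(record["provenance"], indent=2))
print(verify_provenance(record))   # True for untampered content
record["content"] = "Edited after generation."
print(verify_provenance(record))   # False once the content no longer matches its hash
```

A hash alone only detects modification; it does not prove origin. That is why item 2 pairs provenance metadata with traceability requirements, so the generator field can be anchored to an accountable source.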