4. Malicious Actors & Misuse

Institutional trust loss

Erosion of trust in public institutions and weakened checks and balances due to mis/disinformation, influence operations, or real or perceived misuse of generative AI

Source: MIT AI Risk Repository (mit1341)

ENTITY: 1 - Human

INTENT: 3 - Other

TIMING: 2 - Post-deployment

Risk ID: mit1341

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Establish comprehensive AI governance frameworks, including mandatory, independent AI Impact Assessments (AIIAs) and continuous model monitoring, to ensure systems align with ethical principles (fairness, accountability, and transparency) and democratic values.

2. Mandate and implement interoperable technical standards, such as digital provenance mechanisms (e.g., cryptographic watermarks), for all publicly disseminated generative AI outputs, enabling rapid, transparent identification of synthetic media and mitigating foreign information manipulation and interference (FIMI).

3. Adopt Explainable AI (XAI) principles to provide accessible, human-readable explanations of AI-driven decisions, particularly in high-stakes public sector applications, and establish clear audit trails and mechanisms for public scrutiny and challenge of algorithmic outcomes.
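The digital provenance idea in the second mitigation can be illustrated with a minimal sketch. The example below uses a shared-secret HMAC purely for simplicity; real provenance standards such as C2PA use public-key signatures and signed manifests, and watermarking schemes embed signals in the media itself. All names and the key here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared key; illustrative only. A real deployment would use
# public-key signatures so verifiers need no secret material.
SECRET_KEY = b"example-provenance-key"

def sign_output(content: bytes, key: bytes = SECRET_KEY) -> str:
    """Attach a provenance tag (HMAC-SHA256 hex digest) to generated content."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_output(content: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Check whether content still matches its provenance tag."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# A generator tags its output; any party holding the key can later confirm
# the content's origin, and tampering invalidates the tag.
tag = sign_output(b"AI-generated article text")
print(verify_output(b"AI-generated article text", tag))  # True
print(verify_output(b"edited article text", tag))        # False
```

The point of the sketch is the workflow, not the primitive: outputs carry a machine-checkable origin mark, so synthetic media can be identified rapidly and transparently downstream.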