4. Malicious Actors & Misuse
2 - Post-deployment

Informational and Communicational AI Risks

Informational and communicational AI risks refer in particular to informational manipulation through AI systems that influence the provision of information (Rahwan, 2018; Wirtz & Müller, 2019), to AI-based disinformation and computational propaganda, and to targeted censorship through AI systems whose algorithms are modified for that purpose, thereby restricting freedom of speech.

Source: MIT AI Risk Repository (mit292)

ENTITY

3 - Other

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit292

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Establish Comprehensive AI Governance and Accountability Frameworks: Mandate the adoption of established AI risk management frameworks (e.g., NIST AI RMF, ISO/IEC 42001) to embed principles of fairness, transparency, and accountability into the design and deployment of informational AI systems, ensuring clear oversight and defined responsibility for system outcomes.

2. Ensure Algorithmic Integrity and Data Provenance: Employ rigorous data processing techniques, including bias audits and the use of high-quality, verified training datasets, alongside adversarial testing and continuous security monitoring to maintain model robustness against tampering and to actively detect AI-generated disinformation (e.g., deepfakes, hallucinations).

3. Implement Explainable AI (XAI) and Human-in-the-Loop Oversight: Integrate XAI techniques to provide technical and stakeholder clarity on algorithmic decision-making, particularly in content moderation. This must be complemented by robust human oversight mechanisms and accessible appeal processes to safeguard against targeted censorship and to validate the accuracy of critical AI outputs.
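The human-in-the-loop oversight in point 3 can be illustrated with a minimal sketch. Everything here is hypothetical: the `route_decision` function, the confidence threshold, and the labels are illustrative assumptions, not part of any specific framework; the point is simply that restrictive or low-confidence model decisions are escalated to a human reviewer along with an explanation, rather than being finalised automatically.

```python
from dataclasses import dataclass

# Hypothetical policy threshold below which a model decision is not trusted alone.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class ModerationDecision:
    content_id: str
    label: str            # e.g. "disinformation" or "allowed" (illustrative labels)
    confidence: float
    rationale: str        # XAI-style explanation surfaced to the human reviewer
    needs_human_review: bool

def route_decision(content_id: str, label: str, confidence: float,
                   rationale: str) -> ModerationDecision:
    # Escalate any restrictive label, and any low-confidence decision,
    # so the model alone never finalises a censorship action.
    needs_review = label != "allowed" or confidence < CONFIDENCE_THRESHOLD
    return ModerationDecision(content_id, label, confidence, rationale, needs_review)

flagged = route_decision("post-123", "disinformation", 0.72,
                         "matches known false-claim cluster")
print(flagged.needs_human_review)  # True: escalated to a human moderator

cleared = route_decision("post-456", "allowed", 0.97, "no policy match")
print(cleared.needs_human_review)  # False: high-confidence, non-restrictive
```

In a real deployment the escalation queue would also feed the appeal process mentioned above, so that affected users can contest both automated and human-confirmed removals.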