3. Misinformation (2 - Post-deployment)

Information degradation

Information degradation - Creation or spread of false, hallucinatory, low-quality, misleading, or inaccurate information that degrades the information ecosystem and causes people to develop false or inaccurate perceptions, decisions and beliefs; or to lose trust in accurate information.

Source: MIT AI Risk Repository (mit970)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit970

Domain lineage

3. Misinformation

74 mapped risks

3.2 > Pollution of information ecosystem and loss of consensus reality

Mitigation strategy

1. Mandate high-fidelity data governance and continuous model validation. Implement rigorous data quality and integrity measures across the entire AI lifecycle, ensuring training datasets are verified, complete, and unbiased. Employ continuous adversarial testing (red-teaming) and post-deployment monitoring, including using AI to validate other AI outputs, to proactively detect and remediate the generation of false, hallucinatory, or misleading information.

2. Institute a robust human-in-the-loop and accountability framework. Establish mandatory human oversight for reviewing and validating high-impact or sensitive AI-generated content before public dissemination, supplementing automated filtering mechanisms. Maintain comprehensive audit trails and traceability records of data provenance, model decisions, and human interventions to ensure clear accountability for systemic errors or misinformation events.

3. Enhance algorithmic transparency and promote user literacy. Deploy Explainable AI (XAI) methodologies to provide stakeholders with accessible insights into the model's output logic and reasoning, allowing for critical assessment of potential inaccuracies or biases. Concurrently, launch comprehensive digital literacy programs to educate employees and end-users on identifying, verifying, and mitigating the spread of deepfakes and other forms of AI-driven misinformation.
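The human-in-the-loop gating and audit-trail idea in the second mitigation could be sketched as follows. This is a minimal illustration, not part of the repository entry: the confidence threshold, function names, and audit-record fields are all illustrative assumptions.

```python
import uuid
from datetime import datetime, timezone

# Hypothetical confidence threshold below which AI-generated content is
# routed to a human reviewer; 0.9 is an illustrative value, not a standard.
REVIEW_THRESHOLD = 0.9

# Append-only audit trail of release decisions and human interventions.
audit_trail = []


def log_event(record):
    """Append a timestamped, uniquely identified record to the audit trail."""
    record["event_id"] = str(uuid.uuid4())
    record["timestamp"] = datetime.now(timezone.utc).isoformat()
    audit_trail.append(record)
    return record


def gate_output(text, model_confidence, human_reviewer=None):
    """Release AI-generated text only if confidence is high or a human approves.

    Every decision path writes an audit record, so systemic errors can later
    be traced to an automated release, a human approval, or a held output.
    """
    if model_confidence >= REVIEW_THRESHOLD:
        log_event({"action": "auto_release", "text": text,
                   "confidence": model_confidence})
        return True
    if human_reviewer is not None:
        approved = human_reviewer(text)
        log_event({"action": "human_review", "text": text,
                   "confidence": model_confidence, "approved": approved})
        return approved
    # No reviewer available: hold the low-confidence output.
    log_event({"action": "held", "text": text,
               "confidence": model_confidence})
    return False
```

In practice the `human_reviewer` callback would be a review queue rather than a synchronous function, and the audit trail would go to durable, tamper-evident storage; the sketch only shows how gating and traceability fit together.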