3. Misinformation2 - Post-deployment

Pollution of information ecosystem

Contaminating publicly available information with false or inaccurate information

Source: MIT AI Risk Repository (mit265)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit265

Domain lineage

3. Misinformation

74 mapped risks

3.2 > Pollution of information ecosystem and loss of consensus reality

Mitigation strategy

1. Implement robust technical and algorithmic controls to proactively identify, flag, and limit the dissemination of synthetic, deceptive, or factually inaccurate content at scale. This includes continuous monitoring of the information ecosystem for anomalous patterns, employing machine learning for anomaly detection, and applying security measures such as input sanitization and output filtering on generative models to prevent the leakage of harmful outputs.

2. Establish and enforce transparency and a vetted information infrastructure through rigorous, independent fact-checking protocols and the widespread use of content-verification tools and credibility labelling. This systemic effort ensures timely debunking of misinformation and guides users toward verified, high-quality evidence sources, thereby mitigating the erosion of public trust and consensus reality.

3. Mandate comprehensive information and media literacy programs to cultivate critical assessment skills in end users. Educational initiatives build societal resilience by enabling citizens to independently evaluate the veracity, source reliability, and potential bias of digital information, fostering a culture of evidence-based discourse.
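To make the first mitigation concrete, here is a minimal, hypothetical sketch of the "input sanitization and output filtering" controls it describes. The patterns, function names, and risk threshold below are illustrative placeholders invented for this example, not part of the repository entry or any production misinformation-detection system; a real deployment would use trained classifiers rather than keyword rules.

```python
import re

# Placeholder prompt-injection patterns; a real system would maintain a
# vetted, regularly updated pattern set or use a learned detector.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"<\s*script\b", re.IGNORECASE),
]


def sanitize_input(prompt: str) -> str:
    """Strip known injection patterns before the prompt reaches the model."""
    for pattern in INJECTION_PATTERNS:
        prompt = pattern.sub("[removed]", prompt)
    return prompt.strip()


def filter_output(text: str, risk_score: float, threshold: float = 0.8) -> str:
    """Withhold model output whose externally computed risk score is too high.

    `risk_score` is assumed to come from a separate misinformation
    classifier (not shown here); `threshold` is an arbitrary example value.
    """
    if risk_score >= threshold:
        return "[output withheld pending review]"
    return text
```

The key design point is that sanitization and filtering wrap the model rather than modify it: inputs are cleaned before generation and outputs are gated afterward, so the controls can be updated independently of the model itself.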

ADDITIONAL EVIDENCE

Example: Digital commons (e.g. Wikimedia) becoming replete with synthetic or factually inaccurate content (Huang and Siddarth, 2023)