3. Misinformation (Post-deployment)

Worsened epistemic processes for society

Epistemic processes and problem solving: we currently see more reasons to be concerned about AI worsening society's epistemic processes than reasons to be optimistic about AI helping society solve problems better. For example, increased use of content-selection algorithms could drive epistemic insularity and erode trust in credible multipartisan sources, reducing our ability to deal with important long-term threats and challenges such as pandemics and climate change.

Source: MIT AI Risk Repository (mit899)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit899

Domain lineage

3. Misinformation

74 mapped risks

3.2 > Pollution of information ecosystem and loss of consensus reality

Mitigation strategy

1. Mandate Algorithmic Transparency and Auditability: Institute legislative and platform requirements for high-reach recommendation and content-selection algorithms, granting independent, vetted researchers the access and tools necessary to audit the mechanisms that determine information prominence and to identify features driving epistemic insularity or filter-bubble formation.

2. Cultivate Cognitive and Digital Resilience: Implement sustained, cross-sectoral civic education initiatives to promote advanced digital literacy and cognitive resilience, thereby equipping information consumers with the critical skills to evaluate source reliability and resist the persuasive effects of algorithmically amplified, unsupported claims.

3. Strengthen Trust Signal Mechanisms: Develop and integrate technological and institutional methods, such as content provenance standards and dynamic trust indicators, that reliably signal-boost credible, multipartisan, and decision-relevant information to counteract the pollution of the information ecosystem and restore faith in trustworthy knowledge systems.