
Eroded epistemics

Strong AI may... enable personally customized disinformation campaigns at scale... AI itself could generate highly persuasive arguments that invoke primal human responses and inflame crowds... could undermine collective decision-making, radicalize individuals, derail moral progress, or erode consensus reality.

Source: MIT AI Risk Repository (mit571)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit571

Domain lineage

3. Misinformation > 3.2 Pollution of information ecosystem and loss of consensus reality (74 mapped risks)

Mitigation strategy

1. Establish a robust AI governance and data-integrity framework that mandates high-quality, verified training data, along with continuous auditing and testing throughout the AI lifecycle, to mitigate the generation of inaccurate or fabricated information at the source.

2. Deploy content provenance and authenticity infrastructure, such as cryptographic signing and content credentials, to establish an auditable chain of custody for high-stakes media, enabling users and institutions to verify the authenticity and origin of information.

3. Invest in digital literacy and cognitive-resilience campaigns that teach users to critically evaluate AI-generated content, discern factual information, and apply human judgment and oversight as a final checkpoint against sophisticated disinformation, thereby bolstering societal defenses against the erosion of consensus reality.
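The provenance idea in the mitigation list can be illustrated with a minimal sketch. Production systems such as C2PA Content Credentials use public-key signatures and embedded manifests; the version below is a simplified stand-in using an HMAC over a content hash, with hypothetical names (`sign_content`, `verify_content`, the `origin` field) chosen for illustration only.

```python
import hashlib
import hmac
import json

def sign_content(content: bytes, key: bytes, origin: str) -> dict:
    """Produce a minimal 'content credential' for a piece of media.

    The credential binds a SHA-256 digest of the content to a claimed
    origin via an HMAC tag computed with the origin's signing key.
    (Real provenance schemes use asymmetric signatures instead of a
    shared secret; this is an assumption made to keep the sketch
    stdlib-only.)
    """
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"sha256": digest, "origin": origin}, sort_keys=True)
    tag = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "origin": origin, "tag": tag}

def verify_content(content: bytes, credential: dict, key: bytes) -> bool:
    """Recompute the credential for the received bytes and compare tags
    in constant time; any alteration of the media changes the digest
    and therefore invalidates the tag."""
    expected = sign_content(content, key, credential["origin"])
    return hmac.compare_digest(expected["tag"], credential["tag"])

media = b"high-stakes-news-video-bytes"
key = b"signing-key-held-by-publisher"
cred = sign_content(media, key, origin="example-newsroom")

print(verify_content(media, cred, key))         # authentic copy verifies
print(verify_content(media + b"x", cred, key))  # tampered copy fails
```

The design point is the chain of custody: because the credential commits to both the content hash and the claimed origin, a verifier can detect either a modified file or a forged attribution, which is exactly the check that downstream institutions would apply before amplifying high-stakes media.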