3. Misinformation

Disseminating false or misleading information

Predicting misleading or false information can misinform or deceive people. Where an LM prediction causes a false belief in a user, this may be best understood as 'deception', threatening personal autonomy and potentially posing downstream AI safety risks (Kenton et al., 2021), for example in cases where humans overestimate the capabilities of LMs (anthropomorphising systems can lead to overreliance or unsafe use). It can also increase a person's confidence in the truth of a previously held unsubstantiated opinion, thereby increasing polarisation.

Source: MIT AI Risk Repository (mit241)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit241

Domain lineage: 3. Misinformation (74 mapped risks) > 3.1 False or misleading information
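
The Entity/Intent/Timing fields above follow the repository's causal taxonomy. As an illustration only, here is a minimal sketch of how such an entry might be encoded programmatically; the class and field names are hypothetical and not an official repository schema:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encodings of the causal taxonomy values shown above.
class Entity(Enum):
    HUMAN = 1
    AI = 2

class Intent(Enum):
    INTENTIONAL = 1
    UNINTENTIONAL = 2

class Timing(Enum):
    PRE_DEPLOYMENT = 1
    POST_DEPLOYMENT = 2

@dataclass(frozen=True)
class RiskEntry:
    risk_id: str    # e.g. "mit241"
    domain: str     # e.g. "3. Misinformation"
    subdomain: str  # e.g. "3.1 False or misleading information"
    entity: Entity
    intent: Intent
    timing: Timing

# The entry described on this page:
mit241 = RiskEntry(
    risk_id="mit241",
    domain="3. Misinformation",
    subdomain="3.1 False or misleading information",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
```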

Mitigation strategy

1. Implement and continuously evaluate technical detection frameworks, including AI components, that provide real-time, comprehensive, and explainable detection of content falsity, manipulation traces, and propagation patterns in digital environments (see the sketch after this list).

2. Develop and broadly disseminate robust digital, media, and information literacy curricula across public and educational sectors to foster critical thinking, strengthen source-verification skills, and build societal resilience against malicious information operations.

3. Establish regulatory and governance frameworks that balance the need to counteract harmful information with international human rights standards (e.g., freedom of expression), incorporating mechanisms for information correction, provision of accurate data, and carefully tailored legal sanctions where legitimate interests are threatened.
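
Mitigation 1 calls for detection frameworks without prescribing a method. As a minimal, illustrative baseline only (not the repository's approach), the sketch below trains a transparent claim classifier; the toy labels, training examples, and model choice are all assumptions, and a production system would also incorporate manipulation-trace and propagation-pattern signals:

```python
# Minimal illustrative sketch of a content-falsity detection baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data (hypothetical): 1 = misleading, 0 = accurate.
claims = [
    "Drinking bleach cures viral infections.",
    "Vaccines undergo multi-phase clinical trials before approval.",
    "The moon landing was filmed in a studio.",
    "Water boils at 100 degrees Celsius at sea level.",
]
labels = [1, 0, 1, 0]

# TF-IDF features + logistic regression: a simple baseline whose learned
# coefficients can be inspected, giving a crude form of explainability.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(claims, labels)

# Score a new claim; in practice such scores would feed human review queues
# rather than automated takedowns, per the rights-balancing point in item 3.
print(detector.predict_proba(["Garlic prevents all known diseases."])[0][1])
```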

ADDITIONAL EVIDENCE

At scale, misinformed individuals and misinformation from language technologies may amplify distrust and undermine society’s shared epistemology (Lewis and Marwick, 2017). Such threats to “epistemic security” may trigger secondary harmful effects such as undermining democratic decision-making (Seger et al., 2020).