3. Misinformation — Post-deployment

Disseminating false or misleading information

Where a LM prediction causes a false belief in a user, this may threaten personal autonomy and even pose downstream AI safety risks [99].

Source: MIT AI Risk Repository (mit214)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit214

Domain lineage

3. Misinformation

74 mapped risks

3.1 > False or misleading information

Mitigation strategy

1. Prioritize the implementation of Retrieval-Augmented Generation (RAG) architectures and targeted model fine-tuning to enhance factual grounding and logical consistency, thereby minimizing the model's propensity for generating misinformed or hallucinatory content.

2. Establish multi-layered validation and contextualization mechanisms, including automated cross-verification systems, fact-check labels, and provenance cues, to flag and limit user exposure to potentially false or misleading outputs.

3. Develop and integrate mandatory user training and media literacy initiatives, using methods such as 'inoculation games' and general awareness campaigns, to cultivate critical thinking and reduce user susceptibility to believing and sharing misinformation generated by the LM.
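The first strategy's core idea can be illustrated with a minimal sketch of the retrieval step in a RAG pipeline: before the model answers, passages are retrieved from a trusted corpus and prepended to the prompt so the response is grounded in verifiable sources rather than the model's parametric memory. The corpus, overlap-based scoring, and prompt format below are illustrative placeholders, not a description of any particular system.

```python
# Toy RAG retrieval sketch: ground a query in a small trusted corpus
# before handing it to a language model. All names here are hypothetical.

CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the highest mountain above sea level.",
    "The Great Wall of China is not visible from orbit without aid.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query (toy scoring;
    real systems use dense embeddings or BM25)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved evidence so the model can cite it rather than guess."""
    evidence = "\n".join(retrieve(query, CORPUS))
    return f"Answer using only this evidence:\n{evidence}\n\nQuestion: {query}"

prompt = build_grounded_prompt("Where is the Eiffel Tower located?")
```

In a production pipeline the retrieved evidence would also carry provenance metadata (source URL, publication date), which supports the fact-check labels and provenance cues named in the second strategy.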

ADDITIONAL EVIDENCE

It can also increase a person’s confidence in an unfounded opinion, and in this way increase polarisation.