3. Misinformation

Misinformation

Non-embodied AIs are known to propagate misinformation [81, 82]. Various studies have shown that LLMs hallucinate information, including academic citations [83], clinical knowledge [84], and cultural references [85]. EAI systems inherit these shortcomings in the physical world, answering user questions with deceptive or incorrect information [86]. Because VLAs fuse vision and language, their hallucinatory failures can be spatially grounded—e.g., misidentifying an object in view and then generating a plausible yet unsafe action plan around it. And although automated home assistants like Amazon’s Alexa already lie about issues as innocuous as Santa Claus’s existence [87], more mobile, capable, and trusted EAI systems in sensitive roles (such as home assistance or community service) could easily spread model developers’ propaganda and talking points to users.

Source: MIT AI Risk Repository (mit1427)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1427

Domain lineage

3. Misinformation

74 mapped risks

3.1 > False or misleading information

Mitigation strategy

1. Prioritize grounding the generation process in verifiable information through a **Retrieval-Augmented Generation (RAG)** architecture. This technique connects Vision-Language-Action (VLA) models to curated, high-quality external knowledge sources to significantly reduce factual inaccuracies and spatially grounded hallucinatory failures.

2. Mandate a framework for **algorithmic auditing and data inclusivity verification** to enhance the operational integrity of the systems. This involves rigorous bias audits and ensuring models are trained on diverse, balanced, and high-quality data to minimize the initial absorption of erroneous information and prevent culturally specific manipulation tactics.

3. Establish **human oversight control points** within the system's operational workflow. This Human-in-the-Loop process is essential for reviewing and fact-checking critical or high-stakes outputs (especially action plans) prior to execution to prevent deceptive or incorrect information from leading to unsafe physical actions.
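The first and third strategies above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the in-memory knowledge base, the word-overlap retriever, and the `HIGH_STAKES` keyword list are all hypothetical stand-ins for a curated document store, a real dense retriever, and a proper risk classifier.

```python
# Hypothetical curated knowledge base standing in for an external,
# verifiable document source that a RAG pipeline would query.
KNOWLEDGE_BASE = [
    "The kitchen stove must be switched off before the robot leaves the room.",
    "Cleaning agents are stored in the locked cabinet under the sink.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by naive word overlap with the query.

    A real system would use a dense or hybrid retriever; word overlap
    is only a stand-in to show where retrieval fits in the pipeline.
    """
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(user_query: str) -> str:
    """Prepend retrieved evidence so generation is conditioned on it."""
    evidence = retrieve(user_query, KNOWLEDGE_BASE)
    return "Context:\n" + "\n".join(evidence) + f"\nQuestion: {user_query}"

# Hypothetical keyword trigger for the Human-in-the-Loop control point;
# a deployed system would use a learned risk classifier instead.
HIGH_STAKES = {"stove", "heat", "chemical", "medication"}

def needs_human_review(action_plan: str) -> bool:
    """Flag action plans touching high-stakes objects for human sign-off
    before the EAI system is allowed to execute them."""
    return bool(HIGH_STAKES & set(action_plan.lower().split()))
```

For example, `needs_human_review("turn on the stove")` returns `True`, routing the plan to a human checkpoint, while a query about where supplies are kept is answered from the retrieved context rather than from the model's parametric memory alone.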