
Incomplete advice

The model provides advice without having enough information, which can result in harm if the advice is followed.

Source: MIT AI Risk Repository, risk ID mit1304

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit1304

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks)

Subdomain: 7.3 > Lack of capability or robustness

Mitigation strategies

1. Implementation of Granular Transparency and Verification Mechanisms

Deploy user-facing controls, such as uncertainty scoring, source attribution, or cognitive forcing functions, to explicitly signal when the model's advice is potentially incomplete, high-risk, or based on limited information. This strategy is critical for mitigating user overreliance by mandating critical human evaluation for high-impact outputs and creating a realistic mental model of the AI's limitations.

2. Enhancement of Factual Grounding and Robustness

Technically mitigate content incompleteness by integrating Retrieval-Augmented Generation (RAG) frameworks grounded in curated, verified knowledge bases. Concurrently, apply instruction-based safety tuning (e.g., Chain-of-Thought prompting) to ensure the generated advice is systematically derived from sufficient, accurate, and context-specific data, thereby reducing reliance on the model's generalized training set.

3. Mandatory Pre-deployment Risk Assessment and Continuous Performance Auditing

Establish formal governance protocols requiring adversarial testing and specialized quality assurance checks prior to deployment to empirically characterize the model's limitations regarding completeness and accuracy across diverse, high-risk scenarios. This process must be coupled with continuous, real-time monitoring of model outputs to detect and address emerging patterns of incomplete or harmful advice post-deployment.
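The uncertainty-scoring and source-attribution controls described in the first strategy can be sketched as a simple output gate. This is an illustrative sketch only: the names (AdviceResult, present_advice, UNCERTAINTY_THRESHOLD) and the threshold value are assumptions, not part of the repository or any specific framework, and a real deployment would derive the uncertainty score from the model itself.

```python
# Hypothetical sketch of a transparency gate for model advice.
# All names and the threshold value are illustrative assumptions.
from dataclasses import dataclass, field

UNCERTAINTY_THRESHOLD = 0.3  # assumed cutoff; advice above it gets a warning

@dataclass
class AdviceResult:
    text: str
    uncertainty: float                      # 0.0 (confident) .. 1.0 (very uncertain)
    sources: list = field(default_factory=list)  # attribution for grounded claims

def present_advice(result: AdviceResult) -> str:
    """Attach explicit warnings when the advice is weakly grounded or
    highly uncertain, forcing critical human evaluation of the output."""
    lines = [result.text]
    if result.sources:
        # Source attribution: show where the advice is grounded.
        lines.append("Sources: " + ", ".join(result.sources))
    else:
        lines.append("Warning: no supporting sources were retrieved.")
    if result.uncertainty > UNCERTAINTY_THRESHOLD:
        # Uncertainty scoring: flag potentially incomplete advice.
        lines.append("Warning: this advice may be incomplete; verify it "
                     "independently before acting on it.")
    return "\n".join(lines)

# Ungrounded, high-uncertainty advice triggers both warnings.
risky = AdviceResult(text="Take 400mg daily.", uncertainty=0.8)
print(present_advice(risky))
```

The gate never blocks output; it only annotates it, which matches the strategy's goal of building a realistic mental model of the AI's limitations rather than hiding them.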