7. AI System Safety, Failures, & Limitations

Reliability issues

Relying on general-purpose AI products that fail to fulfil their intended function can lead to harm. For example, general-purpose AI systems can make up facts ('hallucination'), generate erroneous computer code, or provide inaccurate medical information. This can lead to physical and psychological harms to consumers, and to reputational, financial, and legal harms to individuals and organisations.

Source: MIT AI Risk Repository, risk mit1024

ENTITY: 1 - Human

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit1024

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Implement Retrieval-Augmented Generation (RAG) to ground large language model (LLM) outputs in verified external knowledge sources, coupled with confidence thresholds and human-in-the-loop verification for critical or high-stakes outputs.

2. Establish continuous, dedicated AI Observability to monitor for data and concept drift, model decay, and anomalous response patterns, enabling early detection and automated intervention to maintain performance and factual accuracy in the production environment.

3. Design and integrate a systematic AI Risk Management Framework (e.g., based on NIST AI RMF or the EU AI Act) across the system lifecycle, including rigorous adversarial testing, robust cybersecurity protection, and clear scope definition to prevent unintended function execution and systemic failure.
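The first mitigation above, confidence thresholds plus human-in-the-loop review, can be sketched as a simple routing rule. This is a minimal illustration, not part of the repository entry: the `Answer` record, the `CONFIDENCE_THRESHOLD` value, and the serve/escalate labels are all hypothetical assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    """Hypothetical record for a RAG-produced answer."""
    text: str
    confidence: float            # model-reported score in [0, 1] (assumed)
    sources: list = field(default_factory=list)  # retrieved grounding passages

CONFIDENCE_THRESHOLD = 0.8       # assumed cutoff; tune per deployment

def route_answer(answer: Answer) -> str:
    """Serve grounded, high-confidence answers; escalate the rest to a human."""
    if not answer.sources:
        return "escalate"        # ungrounded output: never auto-serve
    if answer.confidence < CONFIDENCE_THRESHOLD:
        return "escalate"        # low confidence: human-in-the-loop review
    return "serve"

# Grounded and high-confidence: served automatically.
ok = Answer("Paris is the capital of France.", 0.95, ["geo-doc-17"])
print(route_answer(ok))          # → serve

# No grounding sources: escalated regardless of confidence.
risky = Answer("Take 500mg twice daily.", 0.90)
print(route_answer(risky))       # → escalate
```

In practice the threshold and the escalation queue would be tied to the observability layer described in item 2, so that drift in confidence scores also triggers review.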