Real-world risks (inducing traditional economic and social security risks)
Hallucinations and erroneous decisions by models and algorithms, together with system performance degradation, interruption, and loss of control caused by improper use or external attacks, pose security threats to users' personal safety and property and to socioeconomic security and stability.
ENTITY
3 - Other
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit699
Domain lineage
7. AI System Safety, Failures, & Limitations
7.3 > Lack of capability or robustness
Mitigation strategy
1. Implement Retrieval-Augmented Generation (RAG) architecture: Architecturally ground the large language model's outputs in a verified, enterprise-specific knowledge base. Prioritizing fact-based retrieval and synthesis over purely probabilistic generation significantly reduces the propensity for factual hallucination and erroneous decisions, improving output fidelity and trustworthiness (a minimal grounding sketch follows below).
2. Mandate Human-in-the-Loop (HITL) oversight protocols: Establish rigorous human oversight and verification by subject matter experts (SMEs) for all high-stakes AI-generated outputs, particularly those affecting user safety, property, or legal/regulatory compliance. This operational control acts as a critical final defense mechanism against the acceptance and dissemination of incorrect or fabricated information (see the review-gate sketch below).
3. Establish continuous monitoring and adaptive governance: Deploy real-time logging, audit trails, and performance monitoring across the AI system lifecycle to detect anomalies, system degradation, and unexpected behavioral drift, enabling rapid intervention and system recalibration (e.g., through adversarial testing and red-teaming) to maintain control and robustness against improper use or external attacks (see the monitoring sketch below).
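A minimal sketch of the grounding step in mitigation 1, assuming a hypothetical in-memory knowledge base and a toy term-overlap relevance score; the real retriever, embedding model, and LLM call are deployment-specific and not prescribed by this entry.

```python
"""RAG grounding sketch: retrieve verified passages, then constrain the prompt to them."""

def score(query: str, passage: str) -> float:
    # Toy relevance score: fraction of query terms that appear in the passage.
    terms = set(query.lower().split())
    hits = sum(1 for t in terms if t in passage.lower())
    return hits / max(len(terms), 1)

def retrieve(query: str, knowledge_base: list[str], k: int = 3) -> list[str]:
    # Rank verified passages by relevance and keep the top k.
    ranked = sorted(knowledge_base, key=lambda p: score(query, p), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str, passages: list[str]) -> str:
    # Instruct the model to answer only from the retrieved, verified context.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so instead of guessing.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    kb = [
        "Policy 12.4: refunds over $500 require manager approval.",
        "Policy 3.1: customer data is retained for 90 days.",
    ]
    prompt = grounded_prompt(
        "When is manager approval required for refunds?",
        retrieve("refund manager approval", kb),
    )
    print(prompt)  # This prompt would then be sent to whichever LLM the deployment uses.
```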
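A simplified sketch of the review gate in mitigation 2; the HIGH_STAKES_TOPICS categories, the Draft fields, and the queue behavior are illustrative assumptions, not a prescribed workflow.

```python
"""HITL gate sketch: high-stakes outputs are held for SME review before release."""
from dataclasses import dataclass, field

# Assumed policy categories that trigger mandatory human review.
HIGH_STAKES_TOPICS = {"safety", "property", "legal", "regulatory"}

@dataclass
class Draft:
    text: str
    topics: set
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, draft: Draft) -> str:
        # Route high-stakes drafts to a human expert; release the rest directly.
        if draft.topics & HIGH_STAKES_TOPICS:
            self.pending.append(draft)
            return "HELD_FOR_SME_REVIEW"
        draft.approved = True
        return "RELEASED"

    def sme_approve(self, draft: Draft) -> None:
        # Called only after a subject matter expert has verified the content.
        draft.approved = True
        self.pending.remove(draft)

if __name__ == "__main__":
    queue = ReviewQueue()
    print(queue.submit(Draft("The device is safe to operate above 80°C.", {"safety"})))  # HELD_FOR_SME_REVIEW
    print(queue.submit(Draft("Our office opens at 9am.", {"general"})))                  # RELEASED
```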
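A rough sketch of the drift-detection idea in mitigation 3, assuming a scalar per-response quality score and illustrative baseline/tolerance thresholds; a real deployment would feed this from its own evaluation and logging pipeline.

```python
"""Monitoring sketch: log each inference and alert when rolling quality degrades."""
import time
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 100, baseline: float = 0.90, tolerance: float = 0.10):
        self.scores = deque(maxlen=window)   # rolling window of per-response quality scores
        self.baseline = baseline             # expected quality under normal operation
        self.tolerance = tolerance           # allowed degradation before alerting
        self.audit_log = []                  # append-only audit trail

    def record(self, request_id: str, quality_score: float) -> None:
        # Append an audit-trail entry and update the rolling window.
        self.audit_log.append({"ts": time.time(), "id": request_id, "score": quality_score})
        self.scores.append(quality_score)
        if self.drifting():
            self.alert(request_id)

    def drifting(self) -> bool:
        # Flag drift when the rolling mean falls well below the established baseline.
        if len(self.scores) < self.scores.maxlen:
            return False
        return (sum(self.scores) / len(self.scores)) < (self.baseline - self.tolerance)

    def alert(self, request_id: str) -> None:
        # Hook for paging an operator or triggering recalibration / red-team review.
        print(f"[ALERT] quality drift detected near request {request_id}; human intervention required")

if __name__ == "__main__":
    monitor = DriftMonitor(window=5)
    for i, s in enumerate([0.95, 0.93, 0.70, 0.65, 0.60, 0.55]):
        monitor.record(f"req-{i}", s)
```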