7. AI System Safety, Failures, & Limitations

Complexity of the Intended Task and Usage Environment

As a general rule, more complex environments can quickly lead to situations that were not considered during the design phase of an AI system. Complex environments can therefore introduce risks to the reliability and safety of the AI system.

Source: MIT AI Risk Repository (mit182)

| Field | Value |
| --- | --- |
| Entity | 2 - AI |
| Intent | 2 - Unintentional |
| Timing | 2 - Post-deployment |
| Risk ID | mit182 |
| Domain lineage | 7. AI System Safety, Failures, & Limitations (375 mapped risks) |
| Subdomain | 7.3 > Lack of capability or robustness |

Mitigation strategy

1. **Implement a Continuous Monitoring System (CMS) for real-time context:** Establish a system to continuously monitor the AI's performance, real-time configuration, and operational environment. This measure is critical for identifying anomalies, new vulnerabilities, and environmental deviations not anticipated during the design phase, ensuring timely updates to risk models.
2. **Design for resilience and safe system degradation:** Architect the AI system with the explicit assumption that failures will occur in complex environments. Implement mechanisms for graceful degradation and pre-defined safe-failure modes to limit the scope of potential harm when an unconsidered situation arises.
3. **Establish strong human oversight and override capabilities:** For mission-critical decisions, integrate human-in-the-loop protocols. Ensure operators have clear, accessible controls to override autonomous AI actions, and that reliable, always-available emergency stop mechanisms are in place to address unforeseen, high-risk operational scenarios.
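The three mitigations above can be combined in one control layer around a model. The following is a minimal, hypothetical Python sketch, not an implementation from the repository: `GuardedPredictor`, its `confidence_floor` threshold, and the lambda "models" are all illustrative names chosen here, with a confidence score standing in for the monitoring signal.

```python
import threading
from dataclasses import dataclass, field


@dataclass
class GuardedPredictor:
    """Wraps a model with monitoring, graceful degradation, and a human override.

    Illustrative sketch only; a real system would monitor far richer signals
    than a single confidence score.
    """
    model: callable                      # primary AI decision function -> (action, confidence)
    fallback: callable                   # conservative safe-failure behavior
    confidence_floor: float = 0.8        # below this, degrade to the fallback
    emergency_stop: threading.Event = field(default_factory=threading.Event)

    def decide(self, observation):
        # Mitigation 3 (human oversight): an always-available stop wins outright.
        if self.emergency_stop.is_set():
            return self.fallback(observation)
        # Mitigation 1 (continuous monitoring): score each decision in real time.
        action, confidence = self.model(observation)
        # Mitigation 2 (graceful degradation): out-of-envelope inputs fall back safely.
        if confidence < self.confidence_floor:
            return self.fallback(observation)
        return action


# Usage: a toy driving policy that defers to "slow and stop" when unsure.
model = lambda obs: ("maintain_speed", 0.95 if obs == "clear_road" else 0.4)
guard = GuardedPredictor(model=model, fallback=lambda obs: "slow_and_stop")

print(guard.decide("clear_road"))    # high confidence -> model's action
print(guard.decide("dense_fog"))     # low confidence -> safe fallback
guard.emergency_stop.set()           # human operator override
print(guard.decide("clear_road"))    # override forces the fallback
```

The key design choice is that the override and the degradation path share one fallback behavior, so the safe mode is exercised routinely rather than only in emergencies.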