
Unreliability in corner cases

AI systems tend to show unreliable behavior when confronted with rare or ambiguous input data, also called corner cases. Therefore, controlled behavior is required whenever the AI system faces a corner case.
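One common way to enforce controlled behavior on corner cases is to have the system abstain when its prediction confidence is low. The sketch below is illustrative only; the threshold value and the `predict_with_fallback` helper are assumptions, not part of the repository entry, and a real deployment would calibrate the threshold per application.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; must be calibrated per application


def predict_with_fallback(probabilities: np.ndarray) -> dict:
    """Return the model's class prediction, or defer when the input looks
    like a corner case (i.e., the top-class confidence is low)."""
    top_class = int(np.argmax(probabilities))
    confidence = float(probabilities[top_class])
    if confidence < CONFIDENCE_THRESHOLD:
        # Controlled behavior: refuse to act autonomously and escalate
        # the ambiguous input instead of guessing.
        return {"action": "defer_to_fallback", "confidence": confidence}
    return {"action": "predict", "class": top_class, "confidence": confidence}
```

A clear-cut input (e.g., class probabilities `[0.9, 0.05, 0.05]`) is acted on autonomously, while an ambiguous one (`[0.4, 0.35, 0.25]`) is escalated.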

Source: MIT AI Risk Repository (mit1011)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit1011

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

- Proactive Identification and Dataset Augmentation: Implement advanced AI techniques, such as variational autoencoders (VAEs) or targeted data synthesis, to automatically identify, generate, and incorporate critical, rare, or ambiguous corner cases into the training and validation datasets to maximize the model's robustness and generalization capabilities prior to deployment.
- Real-Time Anomaly Detection and Human-in-the-Loop Control: Integrate real-time monitoring with probabilistic alarm systems that detect when the AI's input data or operational context deviates significantly into an undefined corner-case space, enabling the system to proactively signal the anomaly and mandate a transfer of control to a trained human operator for review and intervention.
- Continuous Adversarial Testing and Iterative Refinement: Conduct continuous, systematic stress-testing and adversarial red-teaming exercises to expose residual or newly emergent corner-case vulnerabilities in the deployed system, utilizing the discovered examples for iterative model retraining and refinement to prevent future unreliability.
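The second mitigation, real-time anomaly detection with human-in-the-loop handoff, can be sketched minimally as a monitor that compares incoming inputs against training-distribution statistics. The class name, the z-score heuristic, and the threshold below are assumptions chosen for illustration; production systems would use a properly calibrated out-of-distribution detector rather than this simple proxy.

```python
import numpy as np


class CornerCaseMonitor:
    """Flag inputs that deviate far from the training distribution,
    mandating transfer of control to a human operator. Uses a simple
    per-feature z-score as a stand-in for a probabilistic alarm."""

    def __init__(self, train_data: np.ndarray, z_threshold: float = 4.0):
        # Summarize the training distribution per feature.
        self.mean = train_data.mean(axis=0)
        self.std = train_data.std(axis=0) + 1e-8  # avoid division by zero
        self.z_threshold = z_threshold

    def check(self, x: np.ndarray) -> str:
        # Any single feature drifting past the threshold triggers handoff.
        z = np.abs((x - self.mean) / self.std).max()
        return "handoff_to_human" if z > self.z_threshold else "autonomous"
```

Inputs resembling the training data are processed autonomously; an input with an extreme feature value is routed to a human for review, matching the "signal the anomaly and mandate a transfer of control" behavior described above.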