7. AI System Safety, Failures, & Limitations

Lack of robustness

Robustness characterizes the resilience of an AI system's output against minor changes in the input domain. Large variation in an AI system's responses to small input changes indicates unreliable outputs.
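This definition can be probed empirically. The sketch below (illustrative names, with simple stand-in scalar "models" rather than real AI systems) estimates the worst-case output change under small random input perturbations; a gap that is large relative to the perturbation size signals a non-robust model.

```python
import numpy as np

def robustness_gap(model, x, epsilon=1e-2, n_trials=100, seed=0):
    """Estimate worst-case output change under small random input perturbations.

    `model` is any callable mapping a vector to a scalar score (hypothetical
    stand-in for an AI system). A gap much larger than `epsilon` indicates
    the output is not robust around `x`.
    """
    rng = np.random.default_rng(seed)
    base = model(x)
    worst = 0.0
    for _ in range(n_trials):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        worst = max(worst, abs(model(x + delta) - base))
    return worst

# Smooth model: output varies gently with the input.
smooth = lambda x: float(np.tanh(x.sum()))
# Brittle model: a hard threshold flips the output on tiny changes.
brittle = lambda x: 1.0 if x.sum() > 0.0 else 0.0

x = np.zeros(4)
print(robustness_gap(smooth, x))   # small gap, bounded by ~4 * epsilon
print(robustness_gap(brittle, x))  # 1.0: output flips near the decision boundary
```

The brittle model illustrates the card's point: even though both models are "accurate" at `x`, the thresholded one produces wildly different outputs for inputs that differ imperceptibly.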

Source: MIT AI Risk Repository, mit1012

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit1012

Domain lineage

7. AI System Safety, Failures, & Limitations (375 mapped risks)

7.3 > Lack of capability or robustness

Mitigation strategy

1. Implement **Adversarial Training and Red Teaming** to enhance model resilience against intentional and subtle perturbations. This involves actively exposing the AI system to adversarial examples during training (in-processing strategies) and conducting red-teaming exercises to simulate real-world attacks that exploit latent vulnerabilities and circumvent safety mechanisms.
2. Establish **Continuous Robustness and Performance Monitoring** with real-time anomaly detection. This ensures consistent performance in operational environments by tracking inputs and outputs for data drift, concept shift, and unexpected behavior, and includes mechanisms for automated detection of corrupted or biased datasets.
3. Enforce **Rigorous Data Quality Controls and Data Augmentation** throughout the AI lifecycle. This includes continuous validation pipelines to verify the integrity and quality of training data, and the strategic application of data augmentation techniques to broaden the input domain and improve the model's ability to generalize invariant features across varied conditions.
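Strategy 2 (continuous monitoring with drift detection) can be sketched minimally. The class below is illustrative only, not from any specific library: it records reference statistics from training-time inputs and flags incoming batches whose mean deviates beyond a z-score threshold. Production systems would use richer tests (e.g. population stability index, Kolmogorov-Smirnov) and monitor outputs as well as inputs.

```python
import numpy as np

class DriftMonitor:
    """Minimal sketch of input-drift detection against a reference window.

    Flags a batch when the z-score of its mean, under the reference
    distribution's statistics, exceeds `threshold`. Names are hypothetical.
    """
    def __init__(self, reference, threshold=3.0):
        reference = np.asarray(reference, dtype=float)
        self.mu = float(reference.mean())
        self.sigma = float(reference.std()) or 1e-9  # guard against zero std
        self.threshold = threshold

    def check(self, batch):
        batch = np.asarray(batch, dtype=float)
        # Standard error of the batch mean under the reference distribution.
        z = abs(batch.mean() - self.mu) / (self.sigma / np.sqrt(len(batch)))
        return bool(z > self.threshold)  # True signals drift / anomaly

rng = np.random.default_rng(42)
monitor = DriftMonitor(rng.normal(0.0, 1.0, size=10_000))

print(monitor.check(np.full(200, 0.0)))  # False: matches reference mean
print(monitor.check(np.full(200, 1.0)))  # True: shifted inputs flagged
```

In an operational pipeline, a `True` result would trigger alerting or fallback behavior rather than a print; the point is that drift is detected automatically, before degraded outputs accumulate.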