7. AI System Safety, Failures, & Limitations

Reliability

Reliability is defined as the probability that the system performs satisfactorily for a given period of time under stated conditions.

Source: MIT AI Risk Repository (risk ID mit642)
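The definition above is the classical reliability-engineering one, and it can be made quantitative. A minimal sketch, assuming a constant-failure-rate (exponential) model, where the failure rate λ is a hypothetical parameter chosen for illustration:

```python
import math

def reliability(t_hours: float, failure_rate: float) -> float:
    """P(no failure in [0, t]) under a constant-failure-rate
    (exponential) model: R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t_hours)

# Hypothetical rate: 1 failure per 10,000 operating hours.
lam = 1e-4
print(round(reliability(1000, lam), 4))  # prints 0.9048
```

Under this model, "performs satisfactorily for a given period of time under stated conditions" becomes the survival probability over that period; other failure models (Weibull, log-normal) substitute a different R(t).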

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit642

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

1. Implement systematic adversarial testing and red teaming throughout the development lifecycle, stress-testing the model against manipulated inputs and extreme, out-of-distribution scenarios to strengthen resilience against model spoofing and performance degradation.
2. Validate training data for integrity, diversity, and representativeness, using techniques such as data augmentation and continuous validation pipelines to mitigate distribution shifts and inherent biases that undermine generalizability.
3. Establish continuous post-deployment monitoring to track model performance and accuracy, detect concept drift or decay, and trigger timely automated or human-in-the-loop retraining so the system remains effective under evolving real-world conditions.
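The third strategy, drift monitoring with a retraining trigger, can be sketched in a few lines. This is an illustrative sketch, not a method prescribed by the repository entry: the `DriftMonitor` class, its window size, and its tolerance margin are all hypothetical choices.

```python
from collections import deque

class DriftMonitor:
    """Flags possible concept drift when windowed accuracy falls a
    fixed margin below the accuracy measured at deployment time."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, prediction, label) -> bool:
        """Record one labeled prediction; return True when the recent
        accuracy window suggests retraining should be triggered."""
        self.outcomes.append(1 if prediction == label else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance
```

In practice the boolean return would feed a human-in-the-loop review queue or an automated retraining pipeline, and production systems typically use statistically grounded detectors (e.g. ADWIN or DDM-style tests) rather than a fixed threshold.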