7. AI System Safety, Failures, & Limitations

AI Systems interacting with brittle environments

Deployed AI systems can rely on physical sensors and data sources that may exhibit hardware drift, and thus data distribution drift, over time. This distribution drift may degrade system robustness and performance. The risk typically involves AI systems operating in undigitized, physical environments.

Source: MIT AI Risk Repository (mit1172)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1172

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

1. Establish continuous, real-time monitoring and alerting systems

Implement a robust Machine Learning Operations (MLOps) framework for continuous monitoring of deployed AI systems. This must include real-time statistical analysis to detect data distribution drift (e.g., covariate shift in sensor readings or feature values) against the training baseline, and performance tracking (e.g., accuracy, F1 score) to identify model degradation. Tiered alerts must be integrated to notify engineering and domain-expert teams upon exceeding predetermined drift thresholds, enabling timely intervention before critical performance loss.

2. Enhance data acquisition and hardware resilience in brittle environments

Address the root cause of hardware drift by implementing strategies focused on data quality and sensor integrity. Deploy ruggedized and redundant physical sensors designed to withstand the undigitized environment's specific challenges (e.g., temperature, vibration, moisture). Incorporate rigorous data validation and anomaly-detection checks within the ingestion pipeline to immediately flag potential sensor failures, calibration issues, or noise that would otherwise manifest as system-wide distribution drift.

3. Implement adaptive and robust model training strategies

Develop a systematic approach for model adaptation. This includes establishing automated, triggered retraining pipelines that incorporate new, validated data representing the current distribution of the physical environment. Proactively enhance model resilience through robust training techniques, such as data augmentation, which exposes the model to a wide range of simulated sensor noise and environmental variations. Consider deploying adaptive learning techniques, like online or incremental learning, to allow the model to continuously adjust its parameters to gradual, non-critical drift without requiring full batch retraining.
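The statistical drift check in strategy 1 can be sketched as follows. This is a minimal illustration, not the repository's prescribed method: it uses a two-sample Kolmogorov–Smirnov test to compare live sensor readings against the training baseline, and the function name `detect_drift`, the significance level, and the simulated readings are all assumptions for the example.

```python
import numpy as np
from scipy import stats

def detect_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when live readings diverge from the training baseline.

    Uses a two-sample Kolmogorov-Smirnov test; a small p-value means the
    two samples are unlikely to come from the same distribution.
    """
    _, p_value = stats.ks_2samp(baseline, live)
    return bool(p_value < alpha)  # True -> raise a tiered alert downstream

# Simulated example: a sensor whose mean output shifts after hardware drift.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # readings captured at training time
drifted = rng.normal(0.5, 1.0, 5000)    # same sensor after a mean shift

print(detect_drift(baseline, baseline[:2500]))  # same distribution: no alert
print(detect_drift(baseline, drifted))          # shifted distribution: alert
```

In a real MLOps pipeline this check would run per feature on a schedule, with the alert threshold tuned to balance false alarms against missed drift.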
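The ingestion-pipeline validation in strategy 2 might look like the sketch below. The function name `validate_batch`, the plausibility bounds (modeled on a hypothetical temperature sensor rated for -40 to 85), and the stuck-sensor heuristic are illustrative assumptions, not part of the source entry.

```python
import numpy as np

def validate_batch(readings: np.ndarray, lo: float, hi: float,
                   min_std: float = 1e-6) -> list[str]:
    """Return data-quality flags for one ingestion batch of sensor readings."""
    flags = []
    if np.any(~np.isfinite(readings)):
        flags.append("non-finite values")           # NaN/inf from a failing sensor
    if np.any((readings < lo) | (readings > hi)):
        flags.append("out-of-range values")         # physically implausible readings
    if np.std(readings) < min_std:
        flags.append("stuck sensor (no variance)")  # identical repeated readings
    return flags

# One reading far outside the sensor's rated range should be flagged.
batch = np.array([21.3, 21.4, 120.0, 21.2])
print(validate_batch(batch, lo=-40.0, hi=85.0))  # ['out-of-range values']
```

Flagging a batch at ingestion lets the pipeline quarantine suspect data before it skews drift statistics or reaches a retraining set.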