7. AI System Safety, Failures, & Limitations

Accidents

Accidents include unintended failure modes that, in principle, could be considered the fault of the system or the developer.

Source: MIT AI Risk Repository (mit99)

ENTITY

3 - Other

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit99

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

1. **Implement Continuous AI Red Teaming and Adversarial Stress Testing.** Systematically subject the AI model to simulated real-world attack conditions, including adversarial prompts, edge-case data drift, and out-of-distribution (OOD) inputs. This proactively uncovers latent vulnerabilities and brittleness that standard validation processes miss, directly shoring up system capability and resilience.
2. **Enforce Robustness via Advanced Training and Data Governance.** Integrate regularization techniques (e.g., L2 regularization, dropout) during model training to prevent overfitting and reliance on spurious correlations. Couple this with continuous data validation and augmentation pipelines so the model learns generalizable, invariant features across diverse, high-integrity datasets.
3. **Establish Real-Time Anomaly Detection and Continuous Monitoring.** Deploy automated systems that scan the AI's runtime environment and outputs 24/7 for operational anomalies, unexpected deviations, and early signs of degraded performance. This enables a shift from reactive incident fixes to proactive intervention before minor failures escalate into significant accidents.
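To illustrate the monitoring step above, here is a minimal sketch of a z-score anomaly detector over a runtime metric such as model confidence. The metric choice, baseline window, and 3-sigma threshold are illustrative assumptions, not part of the repository entry; production systems would use richer detectors and drift tests.

```python
import statistics

def fit_baseline(values):
    """Estimate mean and standard deviation of a healthy runtime
    metric (e.g. per-request model confidence) from a baseline window."""
    return statistics.mean(values), statistics.stdev(values)

def is_anomalous(value, mean, stdev, z_threshold=3.0):
    """Flag an observation whose z-score against the baseline exceeds
    the threshold (3-sigma here is an assumed, tunable default)."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold

# Usage: fit on normal confidences, then screen new outputs.
mean, stdev = fit_baseline([0.91, 0.93, 0.92, 0.90, 0.94])
print(is_anomalous(0.20, mean, stdev))  # far below baseline -> True
print(is_anomalous(0.92, mean, stdev))  # within baseline -> False
```

A detector like this would feed an alerting pipeline so that degraded performance triggers intervention before it escalates into an accident.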