7. AI System Safety, Failures, & Limitations

Accidents

Because general-purpose AI models are “black-box” models that are not fully controllable or understandable, even to their developers, unexpected failures can arise from their unreliability. This could lead to accidents [106] if they are connected to real-world systems during their development, testing, or deployment.

Source: MIT AI Risk Repository (mit839)

ENTITY: 2 - AI
INTENT: 2 - Unintentional
TIMING: 3 - Other
Risk ID: mit839

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Mandate independent, third-party pre-deployment audits and adversarial testing to rigorously validate the model's robustness, safety, and reliability against unexpected failures and biases before connection to real-world systems.
2. Integrate Explainable AI (XAI) frameworks (e.g., LIME, SHAP, saliency mapping) to ensure decision-making processes are transparent and fully auditable, thereby allowing developers and regulators to understand the underlying logic and identify root causes of unreliability (a rough sketch follows this list).
3. Establish a "human-in-the-loop" governance structure, requiring expert human review and authorization for all high-consequence AI-assisted decisions, to ensure continuous operational control and mitigate risks arising from unpredictable emergent behavior (see the second sketch below).
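
As a rough illustration of item 2, the sketch below shows one way an XAI library such as SHAP could attribute a single model decision to its input features. The RandomForestClassifier stand-in, the synthetic dataset, and the sample counts are assumptions made here for illustration; they are not part of the repository entry or a prescribed toolchain.

# Hedged sketch (mitigation item 2): attributing one model decision
# to its input features with SHAP. Model and data are illustrative.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for a deployed "black-box" decision model.
X, y = make_classification(n_samples=300, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def predict_pos(data):
    # Probability of the positive class: the quantity being explained.
    return model.predict_proba(data)[:, 1]

# Model-agnostic explainer with a small background sample as the baseline.
explainer = shap.KernelExplainer(predict_pos, X[:50])

# Per-feature contributions for one decision an auditor wants to trace.
contributions = explainer.shap_values(X[:1], nsamples=200)[0]

# Rank features by how strongly they pushed this decision up or down.
for idx in np.argsort(-np.abs(contributions)):
    print(f"feature {idx}: {contributions[idx]:+.3f}")

Printing the ranked contributions gives reviewers a per-decision trace of which inputs drove the output, which is the auditability property item 2 asks for.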
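As a rough illustration of item 3, the sketch below shows one way a human-in-the-loop gate could hold high-consequence or low-confidence decisions for expert authorization instead of executing them automatically. The Decision and ReviewQueue types, the impact labels, and the 0.9 confidence threshold are hypothetical.

# Hedged sketch (mitigation item 3): a human-in-the-loop gate that holds
# high-consequence or low-confidence AI-assisted decisions for expert review.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    action: str
    confidence: float  # model's self-reported confidence, 0..1 (assumed)
    impact: str        # assumed consequence rating: "low" or "high"

@dataclass
class ReviewQueue:
    pending: List[Decision] = field(default_factory=list)

    def submit(self, decision: Decision) -> None:
        # Held for expert review and explicit authorization.
        self.pending.append(decision)

def gate(decision: Decision, queue: ReviewQueue, min_confidence: float = 0.9) -> bool:
    # Return True only when the decision may execute without human sign-off.
    if decision.impact == "high" or decision.confidence < min_confidence:
        queue.submit(decision)
        return False
    return True

queue = ReviewQueue()
print(gate(Decision("adjust setpoint", 0.97, "low"), queue))   # True: runs automatically
print(gate(Decision("shut down line", 0.99, "high"), queue))   # False: awaits a reviewer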