7. AI System Safety, Failures, & Limitations

Damage to critical infrastructure

The integration of AI systems into critical infrastructure, ranging from transportation to power systems, can cause substantial damage in cases of failure or malfunction. With the growing number of Internet of Things (IoT) devices and interconnected cyber-physical systems, critical infrastructure becomes even more vulnerable [171, 174].

Source: MIT AI Risk Repository (mit1169)

ENTITY: 2 - AI
INTENT: 2 - Unintentional
TIMING: 2 - Post-deployment
Risk ID: mit1169
Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Prioritize a Security- and Safety-by-Design Framework. Mandate the application of rigorous AI risk management frameworks, such as the NIST AI RMF, across the entire system lifecycle (planning, development, deployment). This includes pre-deployment adversarial testing, comprehensive safety verification, and building in fault tolerance to ensure system robustness against both unintended failure modes and targeted cyber-physical attacks before integration into critical operational technology (OT) environments (see the fault-tolerance sketch after this list).

2. Implement Real-time, AI-Driven Anomaly Detection and Automated Response. Deploy hybrid AI-driven cybersecurity and resilience platforms that continuously monitor interconnected cyber-physical systems and IoT endpoints. The primary function must be real-time anomaly detection, identifying deviations in network traffic, data provenance, and device behavior, in order to preemptively contain faults, prevent data poisoning, and trigger automated containment and remediation actions without human latency (see the anomaly-detection sketch after this list).

3. Establish Transparent Human-in-the-Loop Governance. Maintain essential human agency and oversight over AI systems that oversee or control critical infrastructure functions. This necessitates the use of transparent AI agents (Explainable AI, or XAI) and clearly defined human-in-the-loop protocols for high-stakes decisions, ensuring operators retain the necessary authority and ability to override or safely disengage the AI to mitigate cascading failures (a minimal protocol sketch follows the list).
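A minimal sketch of the fault-tolerance idea in mitigation 1, assuming a simple control loop: a hypothetical `ai_controller` is wrapped so that any exception or any command outside a verified safe envelope falls back to a conservative set-point, and a short pre-deployment sweep checks that the envelope holds even on extreme inputs. The controller, bounds, and set-point are illustrative assumptions, not part of the repository entry.

```python
# Minimal sketch: wrap an AI controller so that failing or out-of-range outputs
# fall back to a conservative set-point. Names and bounds are illustrative.

SAFE_SETPOINT = 0.5          # conservative fallback command
OUTPUT_BOUNDS = (0.0, 1.0)   # verified safe operating envelope

def ai_controller(sensor_value: float) -> float:
    """Stand-in for a learned controller; may misbehave on out-of-distribution input."""
    return 1.5 * sensor_value  # deliberately unbounded to show the guard in action

def fault_tolerant_control(sensor_value: float) -> float:
    """Run the AI controller, but fall back when it errors or leaves the safe envelope."""
    try:
        command = ai_controller(sensor_value)
    except Exception:
        return SAFE_SETPOINT                   # fail safe on any controller error
    low, high = OUTPUT_BOUNDS
    if not (low <= command <= high):
        return SAFE_SETPOINT                   # reject commands outside verified bounds
    return command

# Simple pre-deployment sweep: extreme inputs must never leave the safety envelope.
for x in [0.0, 0.3, 0.9, 5.0, -2.0]:
    y = fault_tolerant_control(x)
    assert OUTPUT_BOUNDS[0] <= y <= OUTPUT_BOUNDS[1], "safety envelope violated"
    print(f"input={x:+.1f} -> command={y:.2f}")
```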
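A minimal sketch of the real-time anomaly-detection idea in mitigation 2, using scikit-learn's IsolationForest as one possible unsupervised detector over device telemetry. The feature layout, the simulated data, and the `contain` hook are assumptions for illustration; a production platform would ingest real network and OT telemetry and drive actual containment actions.

```python
# Minimal sketch: unsupervised anomaly detection over cyber-physical telemetry.
# Feature names, thresholds, and the containment hook are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated normal telemetry: [packets/sec, sensor temperature, actuator latency ms]
normal = rng.normal(loc=[500.0, 60.0, 20.0], scale=[50.0, 2.0, 3.0], size=(5000, 3))

# Fit on historical normal-operation data only.
detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
detector.fit(normal)

def contain(sample, score):
    """Hypothetical automated containment hook: isolate the device and alert operators."""
    print(f"ANOMALY score={score:.3f} sample={np.round(sample, 1)} -> isolating endpoint")

def monitor(stream):
    """Score each incoming telemetry sample and trigger containment on anomalies."""
    for sample in stream:
        score = detector.decision_function([sample])[0]  # lower = more anomalous
        if detector.predict([sample])[0] == -1:
            contain(sample, score)

# A few live samples: two normal readings, then an anomalous burst (possible attack or fault).
monitor([[510.0, 60.5, 21.0],
         [495.0, 59.8, 19.5],
         [4000.0, 95.0, 250.0]])
```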
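A minimal sketch of the human-in-the-loop protocol in mitigation 3, assuming each proposed control action carries a risk score and an XAI-style rationale: low-risk actions execute autonomously, while actions above a threshold are surfaced to an operator who can approve or veto them. The threshold, action names, and console prompt are illustrative assumptions rather than a prescribed interface.

```python
# Minimal sketch of a human-in-the-loop gate for high-stakes control actions.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str            # e.g. "reduce_turbine_load"
    risk_score: float    # 0.0 (routine) .. 1.0 (potential cascading failure)
    rationale: str       # XAI-style explanation surfaced to the operator

RISK_THRESHOLD = 0.7     # actions above this require explicit operator approval

def operator_approves(action: ProposedAction) -> bool:
    """Stand-in for a real operator console; here we simply prompt on stdin."""
    answer = input(f"[{action.name}] risk={action.risk_score:.2f}\n"
                   f"Reason: {action.rationale}\nApprove? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    print(f"Executing {action.name}")

def dispatch(action: ProposedAction) -> None:
    """Low-risk actions run autonomously; high-risk actions require a human decision."""
    if action.risk_score >= RISK_THRESHOLD:
        if operator_approves(action):
            execute(action)
        else:
            print(f"Operator vetoed {action.name}; AI disengaged for this step")
    else:
        execute(action)

dispatch(ProposedAction("rebalance_grid_segment", 0.35, "Load forecast within normal band"))
dispatch(ProposedAction("shut_down_substation_7", 0.92, "Sensor pattern resembles fault F-12"))
```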