Unclear attribution from AI component interactions
Interactions between different AI components can cause harm, but it may be difficult to pinpoint which component or interaction is the cause.
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
3 - Other
Risk ID
mit1089
Domain lineage
7. AI System Safety, Failures, & Limitations
7.3 > Lack of capability or robustness
Mitigation strategy
1. Implement Comprehensive, Granular Audit Trails and Component Lineage Tracking
Establish high-resolution logging across all integrated AI services and their interfaces, recording inputs, intermediate data transfers, model versions, and outputs for every component interaction. This forensic capability is essential to isolate the specific component or interface failure that precipitated the systemic harm, thereby enabling clear post-hoc technical and legal attribution.
2. Develop and Enforce Strict Model Interdependence Mapping and Decoupling
Conduct systematic architectural analysis to visualize and control the data flows and dependencies between all AI components. Prioritize modular design principles and robust API contracts to minimize tight coupling and prevent cascading failures. Where critical interdependencies exist, deploy isolation mechanisms or bulkheads to limit the blast radius of a single component's failure.
3. Establish Real-Time, Cross-Component Anomaly Detection and Monitoring
Deploy a unified observability platform that monitors the statistical distribution of inputs and outputs, latency, and operational health for each component individually and for the system as a whole. The primary focus should be on detecting significant deviations in interaction patterns that precede or coincide with a system-level failure, facilitating the immediate isolation and diagnosis of the problematic component.
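Mitigation 1 could be sketched as follows. This is a minimal illustration, not a production design: the `AuditTrail` class, the `fingerprint` helper, and the component names (`retriever`, `ranker`, `generator`) are all hypothetical, and a real deployment would persist records to durable, append-only storage rather than an in-memory list. The key idea is that every hop of a request shares one trace ID, so the full component lineage can be reconstructed after a failure.

```python
import hashlib
import json
import time
import uuid

def fingerprint(payload) -> str:
    """Stable short hash of a JSON-serializable payload, so lineage
    records can reference inputs/outputs without storing them inline."""
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()[:16]

class AuditTrail:
    """Append-only log of every cross-component interaction (hypothetical)."""

    def __init__(self):
        self.records = []

    def log_interaction(self, trace_id, source, target, model_version,
                        inputs, outputs):
        self.records.append({
            "trace_id": trace_id,          # ties all hops of one request together
            "timestamp": time.time(),
            "source": source,
            "target": target,
            "model_version": model_version,  # needed for post-hoc attribution
            "input_hash": fingerprint(inputs),
            "output_hash": fingerprint(outputs),
        })

    def lineage(self, trace_id):
        """Return the ordered chain of component hops for one request."""
        return [r for r in self.records if r["trace_id"] == trace_id]

# Usage: trace one request through two hypothetical components.
trail = AuditTrail()
tid = str(uuid.uuid4())
trail.log_interaction(tid, "retriever", "ranker", "ranker-v2.1",
                      {"query": "refund policy"}, {"doc_ids": [4, 9]})
trail.log_interaction(tid, "ranker", "generator", "gen-v0.9",
                      {"doc_ids": [4, 9]}, {"answer": "..."})
```

After a harmful output, `trail.lineage(tid)` yields the exact sequence of components and model versions involved, which is the raw material for attribution.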
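The bulkhead idea in mitigation 2 can be illustrated with a simple circuit-breaker-style wrapper. This is a sketch under stated assumptions: the `Bulkhead` class and its `max_failures` threshold are invented for illustration, and real systems would add timeouts, half-open probing, and per-dependency pools. The point is that a repeatedly failing downstream component gets isolated instead of letting its failures cascade upstream.

```python
class Bulkhead:
    """Trips open after `max_failures` consecutive errors, isolating the
    downstream component rather than letting failures cascade (hypothetical)."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False

    def call(self, component, payload):
        if self.open:
            # Fail fast: the broken component is no longer invoked at all.
            raise RuntimeError("bulkhead open: component isolated")
        try:
            result = component(payload)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True
            raise
        self.failures = 0  # any success resets the consecutive-failure count
        return result

# Usage: a broken component trips the bulkhead after three failures.
def broken_component(payload):
    raise ValueError("downstream model crashed")

guard = Bulkhead(max_failures=3)
for _ in range(3):
    try:
        guard.call(broken_component, {"x": 1})
    except ValueError:
        pass
# guard.open is now True; further calls fail fast with RuntimeError.
```

Placing one bulkhead per critical interdependency bounds the blast radius of any single component's failure.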
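For mitigation 3, one minimal form of cross-component anomaly detection is a rolling z-score check on a per-component output statistic. This is a simplified sketch: the `DriftMonitor` class, the window size, and the 3-sigma threshold are illustrative assumptions, and a real observability platform would track many signals (latency, input distributions, error rates) per component.

```python
import math
from collections import deque

class DriftMonitor:
    """Flags when a component's output statistic deviates sharply from
    its recent rolling baseline (hypothetical sketch)."""

    def __init__(self, window=100, threshold=3.0):
        self.window = deque(maxlen=window)  # rolling baseline of recent values
        self.threshold = threshold          # z-score cutoff for an anomaly

    def observe(self, value: float) -> bool:
        """Record a value; return True if it is anomalous vs the window."""
        anomalous = False
        if len(self.window) >= 10:  # require a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9  # guard against zero variance
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous

# Usage: a stable baseline, then a sudden distribution shift.
mon = DriftMonitor(window=100, threshold=3.0)
baseline_flags = [mon.observe(v) for v in [1.0, 1.1] * 25]  # none flagged
spike = mon.observe(5.0)  # flagged: far outside the rolling baseline
```

Running one such monitor per component interface lets the system flag which component's behavior shifted at (or just before) a system-level failure, which is exactly the attribution signal this risk calls for.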