Lack of accountability and liability
Determining responsibility when EAI causes harm requires new accountability and liability frameworks that address the complexities of highly autonomous physical systems. Human users may disagree with decisions taken by expert EAI systems, raising significant questions of delegation and responsibility [108]. Lack of EAI accountability could lead to confusion for users and breakdowns in traditional justice systems [109].
ENTITY
3 - Other
INTENT
2 - Unintentional
TIMING
3 - Other
Risk ID
mit1432
Domain lineage
6. Socioeconomic and Environmental
6.5 > Governance failure
Mitigation strategy
1. Establish a comprehensive AI Accountability and Governance Framework (AIMS) that explicitly defines roles, responsibilities, and decision-making authority using tools like RACI matrices across the entire AI system lifecycle. This must be integrated with existing corporate legal and risk functions to clearly assign liability (e.g., developer accountability for design and data integrity, deployer accountability for local validation and governance) and to ensure adherence to relevant legal requirements, such as the EU AI Act.

2. Enforce systematic transparency, traceability, and explainability standards so that attribution of harm is technically feasible. This requires implementing mandatory documentation and audit trails, including version histories, time-stamped records of all key system outputs and human intervention/override decisions, and confidence/uncertainty indicators. This evidence standard ensures that internal review and external legal processes can reconstruct causality when an autonomous decision leads to an adverse outcome.

3. Implement continuous lifecycle governance that mandates proportionate human oversight and real-time monitoring for highly autonomous systems, particularly those operating in physical domains. This includes establishing a "human-in-the-loop" requirement for high-risk decisions and developing protocols for mandatory revalidation, drift monitoring, and incident reporting post-deployment, so that the system's risk profile remains acceptable and aligned with intended-use boundaries.