6. Socioeconomic and Environmental

Complex attribution and responsibility

When multiple actors are involved in AI development and deployment, it becomes difficult to assign responsibility for harm, complicating accountability.

Source: MIT AI Risk Repository (mit1058)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit1058

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.5 > Governance failure

Mitigation strategy

1. Establish Robust Governance Frameworks and Clear Accountability Structures
Define and formally assign specific roles, responsibilities, and lines of accountability across all actors in the AI value chain (designer, developer, integrator, operator/end-user). This must include establishing legal and contractual terms that govern the distribution of liability for AI-induced harm, to mitigate the risk of the "liability sink" effect.

2. Implement Continuous Monitoring and Comprehensive Auditability
Deploy continuous monitoring systems to track real-time model performance, data quality, and security indicators, since AI systems evolve post-deployment. Crucially, maintain immutable decision records and detailed documentation throughout the AI lifecycle so that outputs, human interventions, and system changes remain forensically traceable for post-incident attribution.

3. Prioritize Explainability and Transparency (XAI)
Design and deploy AI systems that use Explainable AI (XAI) methods so that model decisions and reasoning are interpretable and understandable to human operators and regulators. Transparently disclose the purpose, capabilities, limitations, and foreseeable risks of the AI system to all users, directly addressing the "black box" problem that complicates fault determination.
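The "immutable decision records" called for in the auditability strategy can be approximated in software with a hash-chained, append-only log: each record embeds the hash of the previous record, so any retroactive edit breaks the chain and is detectable during a forensic audit. The following is a minimal illustrative sketch, not a production audit system; the `DecisionLog` class, its field names, and the actor/action labels are all hypothetical choices for this example.

```python
import hashlib
import json
import time


class DecisionLog:
    """Append-only, hash-chained log of AI decisions and interventions.

    Each record stores the SHA-256 hash of the previous record, so any
    after-the-fact modification invalidates every later hash and can be
    detected by re-verifying the chain (supporting post-incident attribution).
    """

    GENESIS = "0" * 64  # placeholder "previous hash" for the first record

    def __init__(self):
        self._records = []

    def append(self, actor, action, details):
        """Record one event (e.g. a model output or a human override)."""
        prev_hash = self._records[-1]["hash"] if self._records else self.GENESIS
        record = {
            "timestamp": time.time(),
            "actor": actor,      # e.g. "model", "operator", "integrator"
            "action": action,    # e.g. "prediction", "override", "deployment"
            "details": details,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._records.append(record)
        return record["hash"]

    def verify(self):
        """Recompute every hash; return True only if the chain is intact."""
        prev_hash = self.GENESIS
        for record in self._records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True
```

For example, logging a model prediction followed by a human override, then verifying the chain, would detect any later tampering with the first record's details. A real deployment would also need durable storage and external anchoring of the chain head, which this sketch omits.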