7. AI System Safety, Failures, & Limitations

Attributing the responsibility for AI's failures

This section, covering almost 8% of the articles, addresses the implications of AI acting and learning without direct human supervision. It encompasses two main issues: the responsibility gap and AI's moral status.

Source: MIT AI Risk Repository (mit586)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit586

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.4 > Lack of transparency or interpretability

Mitigation strategy

- Implement a comprehensive AI Governance and Accountability Framework that explicitly defines the roles, responsibilities, and liabilities of all stakeholders—developers, integrators, and operators—at every stage of the AI lifecycle, from design to deployment, thereby systematically closing the 'responsibility gap' before system launch.
- Mandate technical measures for enhanced traceability and auditability, ensuring continuous, immutable logging of all training data, model iterations, decision rationales, and human-in-the-loop (HITL) interventions, to allow for post-incident forensic analysis and regulatory compliance assessment.
- Proactively address potential legal and contractual ambiguities by embedding AI-specific liability clauses in vendor agreements and internal policies, ensuring that a designated human or corporate entity remains the legally accountable party for all outcomes generated by the autonomous system.
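The immutable-logging measure above can be sketched in code. The following is a minimal, illustrative Python example (not part of the repository entry) of a hash-chained, append-only audit log: each entry embeds the hash of the previous entry, so any after-the-fact edit to a recorded decision or HITL intervention breaks the chain and is detectable during forensic review. All class and field names here are hypothetical.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log of AI system events (illustrative sketch)."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, event_type, payload):
        """Append an event (e.g. 'decision', 'model_update', 'hitl_override')."""
        entry = {
            "timestamp": time.time(),
            "event_type": event_type,
            "payload": payload,
            "prev_hash": self._last_hash,  # links this entry to the previous one
        }
        # Hash a canonical (key-sorted) serialization of the entry body.
        body_bytes = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(body_bytes).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        """Recompute the chain; return False if any entry was altered."""
        prev = self.GENESIS
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            body_bytes = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(body_bytes).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record("decision", {"model": "credit-scorer-v2", "input_id": "a91", "output": "deny"})
log.record("hitl_override", {"operator": "ops-17", "new_output": "approve"})
print(log.verify())  # intact chain

# Tampering with an already-recorded payload is detected:
log.entries[0]["payload"]["output"] = "approve"
print(log.verify())  # chain broken
```

In practice such a log would be persisted to write-once storage and anchored externally (e.g. periodic hash checkpoints held by an independent party), so that the operator of the AI system cannot silently rewrite its own audit trail.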