
Accountability

The ability to determine whether a decision was made in accordance with procedural and substantive standards and to hold someone responsible if those standards are not met.

Source: MIT AI Risk Repository (mit630)

ENTITY: 3 - Other
INTENT: 3 - Other
TIMING: 3 - Other
Risk ID: mit630
Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks)
Subdomain: 7.4 > Lack of transparency or interpretability

Mitigation strategy

1. Prioritize the integration of Explainable AI (XAI) techniques and tools so that every model decision is accompanied by a verifiable, comprehensible, and relevant rationale, directly addressing the lack of transparency or interpretability that hinders compliance determination (see the first sketch after this list).
2. Mandate a robust AI governance structure with a defined accountability framework, such as one based on the GAO AI Accountability Framework or RACI principles, that clearly and formally assigns roles and responsibilities for the ethical design, deployment, monitoring, and corrective action related to the AI system's outputs (see the second sketch).
3. Institute continuous, independent AI auditing and validation processes, leveraging established methodologies (e.g., the IIA AI Auditing Framework), to systematically assess and document the system's adherence to predefined procedural and substantive standards across the entire AI system lifecycle (see the third sketch).
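
As a concrete illustration of item 1, the sketch below pairs a model decision with a per-feature rationale. This is a minimal sketch assuming SHAP as the chosen XAI technique and a tree-based scikit-learn model; the dataset, model, and record schema are hypothetical stand-ins, not part of the repository entry.

```python
# Minimal sketch of item 1: attach a per-feature rationale to each
# model decision. Assumes SHAP as the XAI technique; model, data, and
# record schema are hypothetical stand-ins.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=6, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])  # shape: (1, n_features)

# A decision record stores the output alongside its rationale so a
# reviewer can later verify how the decision was reached.
decision_record = {
    "prediction": float(model.predict(X[:1])[0]),
    "rationale": {f"feature_{i}": float(v)
                  for i, v in enumerate(attributions[0])},
}
print(decision_record)
```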
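
For item 2, a RACI-style assignment can be as simple as a lookup table naming, for each lifecycle stage, who is Responsible, Accountable, Consulted, and Informed. The stage and role names below are hypothetical examples, not taken from the GAO framework.

```python
# Illustrative RACI matrix for AI lifecycle stages (item 2). All stage
# and role names are hypothetical examples.
RACI = {
    "design":            {"R": "ML engineering", "A": "Head of AI",   "C": "Legal",          "I": "Internal audit"},
    "deployment":        {"R": "Platform team",  "A": "Head of AI",   "C": "Security",       "I": "Internal audit"},
    "monitoring":        {"R": "MLOps",          "A": "Risk officer", "C": "ML engineering", "I": "Legal"},
    "corrective_action": {"R": "MLOps",          "A": "Risk officer", "C": "Legal",          "I": "Executive board"},
}

def accountable_party(stage: str) -> str:
    """Return the single party answerable for a given lifecycle stage."""
    return RACI[stage]["A"]

print(accountable_party("monitoring"))  # -> "Risk officer"
```

Keeping exactly one "A" per stage preserves the core RACI property that accountability is never shared or ambiguous.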
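
For item 3, a continuous audit can be expressed as an automated check that every logged decision meets the predefined standards. The record fields and the two standards below are hypothetical; an IIA-style audit would cover far more ground.

```python
# Illustrative automated audit check (item 3): flag logged decisions
# that violate two hypothetical standards -- every decision must carry
# a rationale and must name a human reviewer.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DecisionRecord:
    decision_id: str
    has_rationale: bool
    reviewed_by: Optional[str]

def audit(records: List[DecisionRecord]) -> List[str]:
    """Return the IDs of decisions that fail the documented standards."""
    return [r.decision_id for r in records
            if not r.has_rationale or r.reviewed_by is None]

log = [
    DecisionRecord("d-001", has_rationale=True,  reviewed_by="analyst-7"),
    DecisionRecord("d-002", has_rationale=False, reviewed_by=None),
]
print(audit(log))  # -> ["d-002"]
```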