
Lack of transparency

A black-box system that makes decisions without explanation, offering no insight into its process, has two main disadvantages: it may fail to gain the trust of its users, and it may fail to meet regulatory standards such as auditability.

Source: MIT AI Risk Repository (mit90)

| Field | Value |
| --- | --- |
| ENTITY | 2 - AI |
| INTENT | 2 - Unintentional |
| TIMING | 3 - Other |
| Risk ID | mit90 |
| Domain lineage | 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.4 Lack of transparency or interpretability |

Mitigation strategy

1. Deploy **Explainable AI (XAI) and Interpretability Techniques** to convert opaque "black box" decisions into a human-understandable rationale. This involves integrating methods such as SHAP, LIME, or attention analysis to provide actionable, auditable insight into both local (per-instance) and global (model-wide) behavior and feature contributions; a minimal SHAP sketch follows this list.
2. Institute a comprehensive **AI Governance and Auditability Framework** that mandates rigorous, continuous documentation and independent auditing throughout the AI system's lifecycle. This ensures traceability, establishes accountability for outcomes, and provides the evidence packages needed to demonstrate compliance with regulatory standards that require audit capabilities (see the audit-record sketch after the SHAP example).
3. Establish a policy of **Proactive Transparency and Public Disclosure** to inform all relevant stakeholders about the system's operation. This includes issuing a clear, accessible public-facing AI notice detailing the model's purpose, data lineage, known limitations, and bias mitigation strategies, and explicitly notifying users when they are interacting with an automated system.
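
As an illustration of point 1, the sketch below uses the SHAP library to produce both a local (single-prediction) and a global (whole-model) explanation for a tree-ensemble regressor. The model choice and dataset are assumptions made for demonstration; they are not part of the repository entry.

```python
# Minimal sketch: local and global SHAP explanations for a tree ensemble.
# The model, dataset, and plot choices are illustrative assumptions.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
explanation = explainer(X)

# Local explanation: additive feature contributions for one prediction.
shap.plots.waterfall(explanation[0])

# Global explanation: mean |SHAP value| ranks features across the dataset,
# summarizing which inputs drive model behavior overall.
shap.plots.bar(explanation)
```

The local plot shows how each feature pushed one specific prediction away from the baseline, which is the per-instance rationale an auditor or affected user would ask for; the bar plot gives the model-wide view.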
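For point 2, one concrete building block of auditability is an append-only decision log that ties each prediction to the model version, a hash of its inputs, and the explanation produced at decision time. The schema below is a hypothetical illustration of such a record, not a prescribed regulatory format.

```python
# Hypothetical audit-record schema for decision traceability; field names
# and the JSONL sink are illustrative assumptions, not a standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_id: str       # model name and version under audit
    input_hash: str     # SHA-256 of the serialized input, for traceability
    prediction: float   # the decision or score produced
    top_features: dict  # per-feature contributions (e.g. SHAP values)
    timestamp: str      # UTC time of the decision

def log_decision(model_id, features, prediction, contributions, sink):
    """Append one auditable record to a write-once log."""
    record = DecisionRecord(
        model_id=model_id,
        input_hash=hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        prediction=prediction,
        top_features=contributions,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    sink.write(json.dumps(asdict(record)) + "\n")

# Usage (hypothetical model and values):
# with open("decisions.jsonl", "a") as f:
#     log_decision("credit-model-v1.2", {"income": 52000}, 0.81,
#                  {"income": 0.34}, f)
```

Hashing the input rather than storing it directly keeps the log tamper-evident without retaining raw personal data; the full input can live in a separate access-controlled store keyed by the same hash.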