7. AI System Safety, Failures, & Limitations

Explainability & Transparency

The degree to which an AI system's decisions and actions can be understood and interpreted, and the openness of the developer about the data used, the algorithms employed, and the decisions made. A lack of these elements creates risks of misuse, misinterpretation, and a lack of accountability.

Source: MIT AI Risk Repository (mit161)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit161

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.4 > Lack of transparency or interpretability

Mitigation strategy

1. Deploy state-of-the-art Explainable AI (XAI) techniques, such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), to provide post-hoc, human-comprehensible justifications for individual AI decisions, thereby ensuring both model interpretability and decision traceability.

2. Mandate the creation and regular publication of detailed AI Governance Documentation, including Model Cards and Transparency Reports, to disclose the system's design, training data sources, model limitations, and established accountability structures to all relevant stakeholders.

3. Establish a rigorous continuous monitoring program to track model performance and interpretability metrics over time, complemented by regular, independent third-party audits to validate transparency claims and assess alignment with established standards (e.g., NIST AI RMF).
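To make the first strategy concrete, the sketch below computes exact Shapley values, the attribution principle underlying SHAP, for a toy black-box model in pure Python. This is a minimal illustration of the idea, not the `shap` library itself (which approximates these values efficiently for real models); the model, feature names, and baseline are hypothetical.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical black-box scoring model with an interaction term;
    # a stand-in for any predictor f(features) -> score.
    return 2.0 * x["income"] + 1.0 * x["age"] + 0.5 * x["income"] * x["age"]

def shapley_values(f, instance, baseline):
    """Exact Shapley values: each feature's marginal contribution to the
    prediction, averaged over all orderings of the other features.
    "Absent" features are replaced by their baseline values."""
    features = list(instance)
    n = len(features)

    def value(subset):
        # Evaluate f with the features in `subset` taken from the instance
        # and all others held at the baseline.
        x = dict(baseline)
        for k in subset:
            x[k] = instance[k]
        return f(x)

    phi = {}
    for i in features:
        others = [j for j in features if j != i]
        total = 0.0
        for r in range(n):
            for s in combinations(others, r):
                # Standard Shapley weight for a coalition of size r.
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value(set(s) | {i}) - value(set(s)))
        phi[i] = total
    return phi

instance = {"income": 1.0, "age": 1.0}
baseline = {"income": 0.0, "age": 0.0}
print(shapley_values(model, instance, baseline))
```

By construction the attributions sum to the difference between the model's prediction for the instance and for the baseline, which is what makes Shapley-based explanations traceable: every unit of the output score is assigned to a specific input feature.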