Lack of transparency
When the development and use of AI are not explained to the user, or when the decision process does not expose the criteria or steps behind a decision, the use of AI becomes inexplicable.
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit130
Domain lineage
7. AI System Safety, Failures, & Limitations
7.4 > Lack of transparency or interpretability
Mitigation strategy
1. Implement a Multi-Faceted Explainable AI (XAI) Strategy
Mandate the deployment of both inherently interpretable models and post-hoc explanation techniques. Prioritize inherently transparent models (e.g., decision trees, linear regression) for high-stakes decisions where trade-offs between performance and interpretability are necessary. For complex "black-box" models, use model-agnostic methods such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to give technical and auditing personnel both local (instance-specific prediction rationale) and global (overall feature influence) interpretability insights.

2. Establish Formal AI System Documentation and Transparency Reporting
Develop a robust AI governance framework that mandates comprehensive documentation artifacts, such as 'Model Cards' or 'System Cards,' throughout the AI lifecycle. This documentation must systematically disclose the model's architecture, training data provenance, evaluation metrics, risk assessments, and intended use. It also serves as the basis for regular, public-facing transparency reports that demonstrate compliance and foster external trust, in line with established principles for responsible AI deployment and regulatory requirements.

3. Customize and Justify Decision Rationales for End-Users
Design the AI system's user interface to present decision explanations in a form tailored to the end-user's cognitive framework and expertise level. Explanations should prioritize 'justifiability' by clearly articulating, in plain, non-technical language, the key input factors that influenced the outcome. Providing the explanation immediately and in context enhances user understanding of the system's rationale and promotes accountable, informed reliance on the AI's output.
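
To make the local-attribution idea in strategy 1 concrete, here is a minimal from-scratch sketch of exact Shapley value attribution, the quantity that SHAP approximates for real models. It enumerates all feature coalitions, so it is only practical for a handful of features; the three-feature linear "credit-scoring" model, its weights, and the baseline input are all hypothetical illustrations, not part of the source.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for prediction f(x) relative to a baseline.

    Enumerates every feature coalition (exponential cost), so this is a
    didactic sketch; libraries such as SHAP approximate the same values
    efficiently for large models.
    """
    n = len(x)

    def value(coalition):
        # Features in the coalition take their real values; the rest stay
        # at the baseline (a simple stand-in for "feature absent").
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(set(subset) | {i}) - value(set(subset)))
    return phi

# Hypothetical 3-feature linear model (illustrative only).
weights = [0.5, -0.2, 0.8]
model = lambda z: sum(w * v for w, v in zip(weights, z))

x = [1.0, 2.0, 3.0]          # instance being explained
baseline = [0.0, 0.0, 0.0]   # reference input

phi = shapley_values(model, x, baseline)
# For a linear model, phi[i] equals weights[i] * (x[i] - baseline[i]),
# and the attributions sum to model(x) - model(baseline).
```

Per strategy 3, the resulting `phi` values can then be translated into plain-language statements (e.g., "feature 3 contributed most to this outcome") rather than shown to end-users as raw numbers.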