Transparency
An external entity in an AI-based ecosystem may want to know which parts of the input data affect the final decision of a learning model.
ENTITY
2 - AI
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit603
Domain lineage
7. AI System Safety, Failures, & Limitations
7.4 > Lack of transparency or interpretability
Mitigation strategy
1. Implementation of Local, Post-Hoc Explainable AI (XAI) Methods
Employ model-agnostic, post-hoc explanation techniques, such as SHAP or LIME, to generate local feature-attribution explanations for all critical decisions. This quantifies the direct influence of specific input data elements on the final output, providing the external entity with a verifiable, traceable link between data and decision, as required for transparency and interpretability. (See the SHAP sketch after this list.)

2. Integration of Transparency and Auditability Requirements within AI Governance
Establish a mandatory, auditable AI Transparency Framework aligned with recognized best practices (e.g., the NIST AI RMF). This requires systematic documentation and logging of all model development, training-data provenance, feature-engineering rationale, and the specific XAI methodologies applied, ensuring that the entire decision-making lifecycle is traceable for external audit and compliance reviews. (See the audit-record sketch after this list.)

3. Design and Delivery of Stakeholder-Centric Explanations
Develop communication protocols and user interfaces that present technical model explanations in a meaningful, understandable format tailored to the non-expert external entity. Explanations must be cognitively aligned and free of technical jargon so that the rationale for the model's behavior is clearly understood, promoting appropriate trust and mitigating the risk of unwarranted reliance or distrust. (See the plain-language rendering sketch after this list.)
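For mitigation 1, a minimal sketch of local, post-hoc feature attribution with SHAP. The RandomForestRegressor, the diabetes dataset, and the single-prediction workflow are illustrative assumptions standing in for the deployed model and its critical decisions.

```python
# Minimal sketch: local post-hoc explanation with SHAP (illustrative model/data).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Stand-in for the deployed learning model whose decisions must be explained.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes Shapley-value attributions exactly for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # explain one decision

# Rank input features by attributed influence on this single prediction,
# giving the external entity a traceable data-to-decision link.
ranked = sorted(
    zip(data.feature_names, shap_values[0]),
    key=lambda pair: -abs(pair[1]),
)
for feature, contribution in ranked:
    print(f"{feature}: {contribution:+.4f}")
```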
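For mitigation 2, a minimal sketch of one machine-readable audit record appended to a JSON Lines log. Every field name, path, and value is hypothetical; the content hash is one simple way to make entries tamper-evident for external auditors.

```python
# Minimal sketch: a tamper-evident, machine-readable audit record per decision.
# All field names, paths, and values below are hypothetical.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class DecisionAuditRecord:
    model_version: str          # which trained artifact produced the decision
    training_data_source: str   # provenance of the training data
    feature_rationale: str      # why these features were engineered/selected
    xai_method: str             # the explanation technique that was applied
    decision_id: str
    attributions: dict          # feature name -> attributed influence
    timestamp: str

def log_decision(record: DecisionAuditRecord, path: str = "audit_log.jsonl") -> str:
    """Append the record to a JSON Lines log; return a content hash so
    auditors can verify an entry was not altered after the fact."""
    payload = json.dumps(asdict(record), sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    with open(path, "a") as fh:
        fh.write(json.dumps({"sha256": digest, "record": json.loads(payload)}) + "\n")
    return digest

record = DecisionAuditRecord(
    model_version="credit-model-1.4.2",
    training_data_source="warehouse://loans/2023Q4",
    feature_rationale="income normalized per region; see feature design doc",
    xai_method="SHAP TreeExplainer",
    decision_id="dec-000123",
    attributions={"income": 0.41, "debt_ratio": -0.22},
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(log_decision(record))
```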
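For mitigation 3, a minimal sketch of a rendering layer that turns raw attributions into jargon-free statements for the non-expert external entity. The phrasing templates and example attribution values are illustrative assumptions.

```python
# Minimal sketch: render raw attributions as jargon-free statements for a
# non-expert external entity. Templates and example values are illustrative.
def explain_in_plain_language(attributions, decision, top_k=3):
    """Turn the top-k feature attributions into plain-English sentences."""
    ranked = sorted(attributions.items(), key=lambda pair: -abs(pair[1]))[:top_k]
    lines = [f"The system's decision was: {decision}."]
    for feature, weight in ranked:
        direction = "supported" if weight > 0 else "weighed against"
        label = feature.replace("_", " ")  # hide internal variable naming
        lines.append(f"- Your {label} {direction} this decision.")
    lines.append("You may request a human review of this decision.")
    return "\n".join(lines)

print(explain_in_plain_language(
    {"income": 0.41, "debt_ratio": -0.22, "account_age": 0.05},
    decision="loan approved",
))
```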