
Transparency and explainability

A recurring complaint among participants was a lack of knowledge about how AI systems reach their judgements. They emphasized the importance of making AI systems more transparent and explainable so that people can have confidence in their outputs and hold them accountable for their decisions. Ethical concerns about AI, along with issues of transparency and explainability, arise because AI systems are typically opaque, making it difficult for users to understand the rationale behind their judgements. This lack of understanding can breed distrust and reluctance to adopt AI technology, and it makes it harder to hold AI systems accountable for their actions.

Source: MIT AI Risk Repository (mit589)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit589

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.4 Lack of transparency or interpretability

Mitigation strategy

1. Prioritize inherently interpretable models, or embed Explainable AI (XAI) techniques such as SHAP or LIME into the model development lifecycle, so that each output decision carries a precise, technically verifiable rationale (a sketch follows this list).
2. Mandate comprehensive, publicly accessible disclosure of the AI system's architecture, training data provenance, risk assessments, and intended operational scope to meet stakeholder oversight needs and build end-user trust.
3. Establish robust AI governance and auditing protocols that keep a human in the loop for critical decisions and periodically subject the system to independent algorithmic scrutiny to validate fairness, accuracy, and explanation consistency post-deployment (a second sketch follows this list).
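To make item 1 concrete, the sketch below attaches SHAP attributions to a tree ensemble so that each individual prediction carries a per-feature rationale. The GradientBoostingClassifier, the breast-cancer dataset, and the top-3 reporting are illustrative assumptions, not part of the repository entry; any model the shap library supports would serve.

```python
# A minimal sketch of item 1, assuming scikit-learn and shap are installed
# (pip install scikit-learn shap). Model and dataset are illustrative only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles, yielding a
# per-feature contribution for every individual prediction.
explainer = shap.TreeExplainer(model)
explanation = explainer(X.iloc[:5])

# For each of the first five predictions, report the three features that
# pushed the model's raw output hardest in either direction.
for i in range(5):
    contributions = zip(X.columns, explanation.values[i])
    top = sorted(contributions, key=lambda kv: abs(kv[1]), reverse=True)[:3]
    print(f"sample {i}: {[(name, round(float(v), 3)) for name, v in top]}")
```

The attributions, together with the explainer's base value, sum to the model's raw margin for that row, which is what makes the rationale technically verifiable rather than merely plausible.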
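For item 3, one common realization of human-in-the-loop oversight is a confidence gate that auto-resolves only confident predictions and escalates everything else to a reviewer queue. The 0.9 threshold and the ReviewQueue structure below are hypothetical illustrations, not repository recommendations.

```python
# A minimal sketch of item 3: route uncertain predictions to human review.
# The 0.9 threshold and the ReviewQueue type are hypothetical assumptions.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def escalate(self, case_id: str, proba: float) -> None:
        # In production this would notify a reviewer and log an audit record;
        # here it simply records the deferred case.
        self.pending.append((case_id, proba))

def gated_decision(case_id: str, proba: float, queue: ReviewQueue,
                   threshold: float = 0.9) -> str:
    """Auto-decide only when the model is confident; otherwise defer."""
    if proba >= threshold:
        return "approve"
    if proba <= 1 - threshold:
        return "reject"
    queue.escalate(case_id, proba)
    return "defer_to_human"

queue = ReviewQueue()
for case_id, proba in [("a1", 0.97), ("a2", 0.55), ("a3", 0.04)]:
    print(case_id, "->", gated_decision(case_id, proba, queue))
print("awaiting human review:", queue.pending)
```

Deferred cases also leave the audit trail that the periodic independent scrutiny described in item 3 can sample from post-deployment.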