7. AI System Safety, Failures, & Limitations

Lack of model transparency

Lack of model transparency arises from insufficient documentation of the model's design, development, and evaluation process, and from the absence of insight into the model's inner workings.

Source: MIT AI Risk Repository (mit1326)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit1326

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.4 > Lack of transparency or interpretability

Mitigation strategy

1. Mandate standardized public disclosure: Require the public disclosure of comprehensive documentation, such as System Cards and a Secure Development Framework, summarizing model design, testing procedures, evaluation results, and risk mitigation strategies, subject only to appropriate redactions for sensitive safety information.

2. Employ Explainable AI (XAI) methodologies: Integrate Explainable AI techniques, including visualization methods, feature-importance analysis, and model-generated explanations, to provide actionable insight into the inner workings and decision-making logic of "black box" models.

3. Establish robust AI governance and auditing: Implement a formal AI governance framework that mandates continuous monitoring, regular fairness and transparency audits, and immutable, self-documenting audit trails across the model's entire lifecycle to ensure sustained accountability and compliance.
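One concrete XAI technique behind the feature-importance analysis mentioned above is permutation importance: shuffle one input feature and measure how much the model's error grows. The sketch below is a minimal, self-contained illustration of that idea; the `black_box_model` and its data are hypothetical stand-ins for an opaque AI system, not any specific model from the repository.

```python
import random

# Hypothetical opaque model: in practice this would be a trained system
# whose internals we cannot inspect directly.
def black_box_model(x):
    return 2.0 * x[0] + 0.1 * x[1]

def mse(model, X, y):
    """Mean squared error of the model on dataset (X, y)."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, trials=10, seed=0):
    """Average increase in error when one feature's column is shuffled.

    A large increase means the model relies heavily on that feature.
    """
    rng = random.Random(seed)
    base = mse(model, X, y)
    increases = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        increases.append(mse(model, X_perm, y) - base)
    return sum(increases) / trials

rng = random.Random(42)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
y = [black_box_model(x) for x in X]  # labels produced by the model itself

imp0 = permutation_importance(black_box_model, X, y, feature=0)
imp1 = permutation_importance(black_box_model, X, y, feature=1)
print(imp0 > imp1)  # feature 0 dominates the model's behaviour
```

Production XAI toolkits (e.g. scikit-learn's `permutation_importance` or SHAP) implement the same idea with richer statistics; the point here is only that the technique treats the model as a pure black box, requiring no access to its internals.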
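The "immutable, self-documenting audit trail" in the governance strategy can be realized by hash-chaining log entries, so that any retroactive edit breaks verification. The following is a minimal sketch under that assumption; the entry format and event strings are illustrative, not a prescribed standard.

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event whose hash chains to the previous entry.

    Because each hash covers the previous entry's hash, editing any
    earlier record invalidates every later one.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify(log):
    """Re-derive every hash; return False on any tampering or gap."""
    prev = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "model v1 training completed")
append_entry(log, "fairness and transparency audit passed")
ok_before = verify(log)          # chain intact
log[0]["event"] = "tampered"     # retroactive edit of the first record
ok_after = verify(log)           # chain now fails verification
print(ok_before, ok_after)
```

Real deployments would add timestamps, signatures, and append-only storage, but even this skeleton shows why such a trail supports accountability: auditors can detect, not merely forbid, after-the-fact changes.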