7. AI System Safety, Failures, & Limitations

Inaccessible training data

Without access to the training data, the types of explanations a model can provide are limited and more likely to be incorrect.

Source: MIT AI Risk Repository (mit1311)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit1311

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.4 Lack of transparency or interpretability

Mitigation strategy

1. Implement robust post-hoc, model-agnostic explainability techniques (e.g., LIME or SHAP) to generate understandable justifications for individual black-box decisions, creating the transparency and traceability that inaccessible training data precludes.

2. Establish a continuous monitoring and auditing framework that uses disaggregated performance metrics to assess equitable outcomes and detect functional disparities across user groups, which may reveal biases embedded by opaque training data.

3. Conduct systematic, model-agnostic bias and fairness audits on the model's outputs using external toolkits to identify and mitigate discrimination that cannot be detected through internal data inspection, given that the training data itself is inaccessible.
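The disaggregated monitoring described in step 2 can be sketched in plain Python. The group labels, predictions, and disparity threshold below are hypothetical illustrations (not part of the repository entry); in practice the groups would come from user metadata and the threshold from policy.

```python
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Compute per-group accuracy so disparities between user groups
    surface even when the training data cannot be inspected directly."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical labeled outputs from a deployed black-box model.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

per_group = disaggregated_accuracy(y_true, y_pred, groups)
# Disparity gap: the spread between the best- and worst-served groups.
gap = max(per_group.values()) - min(per_group.values())
DISPARITY_THRESHOLD = 0.1  # hypothetical audit policy value
flagged = gap > DISPARITY_THRESHOLD
```

Running such a check continuously on production traffic, rather than once at release, is what turns a one-off audit into the monitoring framework the mitigation strategy calls for.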