7. AI System Safety, Failures, & Limitations

Improper retraining

Using undesirable output (for example, inaccurate, inappropriate, or unvetted user-generated content) for retraining can result in unexpected model behavior.

Source: MIT AI Risk Repository (mit1283)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1283

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

1. Implement rigorous, multi-stage data curation, validation, and sanitization pipelines to ensure that all data, especially user-generated or model-generated content, is verified for factual accuracy, ethical compliance, and representativeness prior to inclusion in the retraining dataset, thereby mitigating data poisoning and bias injection.

2. Establish a mandatory Human-in-the-Loop (HITL) governance gate for critical retraining cycles, requiring expert review and explicit sign-off on both the new training data composition and the resultant model's performance and alignment metrics to prevent the recursive incorporation of undesirable outputs.

3. Deploy continuous, real-time model output filtering and monitoring mechanisms to detect and quarantine inappropriate, inaccurate, or toxic content, preventing it from ever being routed back into the model's training or fine-tuning data streams.
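The filtering-and-quarantine gate described in the mitigations above can be sketched as a minimal pipeline stage. This is an illustrative assumption, not part of the repository entry: the class and filter names (`RetrainingGate`, `toxicity_filter`, `length_filter`) are hypothetical, and the toy filters stand in for real classifiers and accuracy checks.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical sketch: each filter inspects a candidate sample and
# returns a rejection reason, or None if the sample passes.
Filter = Callable[[str], Optional[str]]

@dataclass
class RetrainingGate:
    """Routes candidate samples into the retraining set or quarantine."""
    filters: list
    accepted: list = field(default_factory=list)
    quarantined: list = field(default_factory=list)

    def route(self, sample: str) -> bool:
        for check in self.filters:
            reason = check(sample)
            if reason is not None:
                # Quarantine instead of discarding, so reviewers can audit.
                self.quarantined.append((sample, reason))
                return False
        self.accepted.append(sample)
        return True

# Toy stand-ins for real toxicity / verifiability classifiers.
def toxicity_filter(text: str) -> Optional[str]:
    return "toxic" if "idiot" in text.lower() else None

def length_filter(text: str) -> Optional[str]:
    return "too short to verify" if len(text.split()) < 3 else None

gate = RetrainingGate(filters=[toxicity_filter, length_filter])
gate.route("The capital of France is Paris.")  # passes both filters
gate.route("You idiot.")                       # quarantined as toxic
```

In a production setting the same gate shape would wrap model-based classifiers and feed the quarantine queue into the human review step from mitigation 2, so nothing enters the retraining stream without either passing automated checks or explicit sign-off.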