7. AI System Safety, Failures, & Limitations

Model misspecification

Misspecified models are known to produce inaccurate parameter estimates, inconsistent error terms, and erroneous predictions. Together, these factors lead to poor predictive performance on unseen data and biased outcomes when the model is used to make decisions [68].
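A minimal sketch of this failure mode (illustrative, not part of the repository entry): a linear model is fit to data whose true generating process is quadratic. The coefficients and noise levels below are arbitrary assumptions chosen to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# True data-generating process is quadratic; the fitted model is linear,
# i.e. deliberately misspecified.
x_train = rng.uniform(-1, 1, 200)
y_train = 1.0 + 2.0 * x_train + 3.0 * x_train**2 + rng.normal(0, 0.1, 200)

# Ordinary least squares with a linear design matrix [1, x].
X = np.column_stack([np.ones_like(x_train), x_train])
beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)

# Unseen data drawn from a wider range than the training data.
x_test = rng.uniform(-2, 2, 200)
y_test = 1.0 + 2.0 * x_test + 3.0 * x_test**2 + rng.normal(0, 0.1, 200)

def mse(x, y, b):
    """Mean squared error of the linear model b[0] + b[1] * x."""
    pred = b[0] + b[1] * x
    return float(np.mean((y - pred) ** 2))

train_mse = mse(x_train, y_train, beta)
test_mse = mse(x_test, y_test, beta)
print(train_mse, test_mse)  # test error is far larger than training error
```

The omitted quadratic term both biases the intercept estimate and causes the residual variance to depend on x, so the training error badly understates the error on unseen inputs.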

Source: MIT AI Risk Repository, mit338

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit338

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

1. Prioritize the Conceptual Soundness and Specification of the Ideal Goal: Establish an unambiguous statement of the model's purpose, ensuring the operational "design specification" (e.g., proxy metrics) aligns closely with the unarticulated "ideal specification" (the intended outcome) to mitigate fundamental specification errors.

2. Implement Multiply Robust Estimation and Regularization Techniques: Employ advanced statistical and econometric methodologies, such as multiply robust weights derived from multiple candidate models or Tikhonov regularization, to construct estimators that minimize bias and improve inference robustness under local model misspecification.

3. Conduct Rigorous Out-of-Sample Validation and Robustness Testing: Use comprehensive cross-validation protocols (e.g., K-fold, time-series split) and specialized sensitivity analysis, including noise injection and edge-case testing, to empirically detect failures to generalize and assess the model's stability against unmodeled inputs or perturbations.
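The Tikhonov (ridge) regularization named in the second strategy has a closed form, (XᵀX + λI)⁻¹Xᵀy. The sketch below contrasts it with ordinary least squares on a nearly collinear design; the design, noise levels, and λ value are illustrative assumptions, not from the source.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned design: two nearly collinear predictors, so that
# ordinary least squares produces unstable coefficient estimates.
n = 100
x1 = rng.normal(0, 1, n)
x2 = x1 + rng.normal(0, 0.01, n)  # almost a copy of x1
X = np.column_stack([x1, x2])
y = x1 + rng.normal(0, 0.1, n)

# Ordinary least squares: solves (X^T X) b = X^T y.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Tikhonov (ridge) regularization: solves (X^T X + lambda I) b = X^T y.
# The penalty damps the direction along which the data barely constrain b.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

print(beta_ols, beta_ridge)
```

Because the two predictors carry essentially the same information, OLS can place large offsetting weights on them, while the ridge solution shrinks toward a stable split whose sum still recovers the true combined effect.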