Model bias
While data bias is a major contributor to model bias, model bias manifests in many forms, such as presentation bias, model evaluation bias, and popularity bias. Model bias also arises from various sources [62], such as AI/ML model selection (e.g., support vector machines, decision trees), regularization methods, algorithm configurations, and optimization techniques.
ENTITY
3 - Other
INTENT
2 - Unintentional
TIMING
1 - Pre-deployment
Risk ID
mit337
Domain lineage
1. Discrimination & Toxicity
1.1 > Unfair discrimination and misrepresentation
Mitigation strategy
1. Prioritize pre-processing techniques to ensure the training data is both diverse and representative of the target population. This includes rigorous bias identification and analysis of the data, followed by balancing methods such as oversampling, undersampling, relabeling, or perturbation to minimize the influence of historical or popularity biases before model training.
2. Employ in-processing algorithmic techniques by integrating explicit fairness constraints or regularization terms into the model's objective function during training. Strategies such as fairness-aware algorithms, adversarial debiasing, or reweighing training instances can be utilized to ensure the model's learning process actively minimizes disparities across different subgroups.
3. Establish a robust, continuous post-deployment governance framework that includes regular auditing and evaluation using established fairness metrics (e.g., Equal Opportunity Difference or Disparate Impact Analysis). Apply post-processing correction methods to adjust final model outputs (e.g., re-ranking or probability calibration) to ensure equitable outcomes, and enhance model interpretability via Explainable AI (XAI) to facilitate transparency and accountability.
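The two fairness metrics named in the post-deployment auditing step can be computed directly from a model's binary predictions. The sketch below (a minimal illustration using NumPy; the variable names, toy data, and group encoding are assumptions, not part of the source) shows both: Disparate Impact as the ratio of positive-prediction rates across groups, and Equal Opportunity Difference as the gap in true-positive rates.

```python
import numpy as np

def disparate_impact(y_pred, group):
    # Ratio of positive-prediction rates: unprivileged / privileged.
    # Values near 1.0 indicate parity; the common "80% rule" flags
    # ratios below 0.8 as potentially discriminatory.
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv

def equal_opportunity_difference(y_true, y_pred, group):
    # Difference in true-positive rates (recall on y_true == 1)
    # between groups; values near 0.0 indicate equal opportunity.
    tpr_unpriv = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_priv = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_unpriv - tpr_priv

# Toy audit data for a hypothetical binary classifier.
y_true = np.array([1, 1, 0, 1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = unprivileged, 1 = privileged

print(disparate_impact(y_pred, group))                     # 0.5 / 0.75 ≈ 0.667
print(equal_opportunity_difference(y_true, y_pred, group))  # 2/3 - 1 ≈ -0.333
```

In practice these checks would run on held-out audit data for each protected attribute, with toolkits such as AIF360 or Fairlearn providing hardened implementations of the same metrics.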