6. Socioeconomic and Environmental

Lack of testing diversity

AI model risks are socio-technical, so their testing needs input from a broad set of disciplines and diverse testing practices.

Source: MIT AI Risk Repository (mit1327)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

1 - Pre-deployment

Risk ID

mit1327

Domain lineage

6. Socioeconomic and Environmental (262 mapped risks)

6.5 > Governance failure

Mitigation strategy

1. Formalize an interdisciplinary socio-technical testing framework. Establish a governance and operational framework that mandates systematic engagement of interdisciplinary expertise (e.g., computer science, social science, ethics, law) for holistic threat modeling and risk assessment, explicitly recognizing the socio-technical nature of AI systems.

2. Mandate diverse testing and evaluation panels. Ensure that all human-in-the-loop testing, human review of AI outputs, and evaluation panels for critical decisions include a broad range of demographic and professional backgrounds, incorporating diverse life experiences and worldviews to mitigate the blind spots caused by homogeneity of perspective.

3. Execute continuous slice-based and counterfactual bias audits. Implement continuous testing workflows that use slice-based evaluation to measure model performance and fairness across significant data subgroups, and conduct counterfactual testing with varied sensitive attributes to proactively identify and mitigate algorithmic biases that may emerge over time or disproportionately affect specific populations.
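The two audit techniques in step 3 can be sketched as follows. This is a minimal illustration, not part of the repository entry: the model, record layout, and field names (`group`, `group_attr`, `score`) are hypothetical stand-ins. Slice-based evaluation computes a metric per subgroup; counterfactual testing swaps the sensitive attribute and checks whether the prediction changes.

```python
# Hypothetical sketch of the audits in mitigation step 3:
#  - slice-based evaluation: accuracy per subgroup
#  - counterfactual testing: swap a sensitive attribute, see if the
#    prediction flips (a flip suggests the model relies on it)
# The model and data below are illustrative stand-ins only.
from collections import defaultdict

def slice_accuracy(records, predict):
    """Accuracy of `predict` on each slice keyed by the 'group' field."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        g = r["group"]
        totals[g] += 1
        if predict(r["features"]) == r["label"]:
            hits[g] += 1
    return {g: hits[g] / totals[g] for g in totals}

def counterfactual_flips(records, predict, attr, values):
    """Count records whose prediction changes when `attr` is swapped."""
    flips = 0
    for r in records:
        base = predict(r["features"])
        for v in values:
            if v == r["features"][attr]:
                continue
            counterfactual = dict(r["features"], **{attr: v})
            if predict(counterfactual) != base:
                flips += 1
                break  # one flipped variant is enough to flag the record
    return flips

# A toy model that (problematically) uses the sensitive attribute directly.
def biased_model(x):
    return 1 if x["score"] > 50 or x["group_attr"] == "A" else 0

data = [
    {"features": {"score": 60, "group_attr": "A"}, "label": 1, "group": "A"},
    {"features": {"score": 40, "group_attr": "A"}, "label": 0, "group": "A"},
    {"features": {"score": 60, "group_attr": "B"}, "label": 1, "group": "B"},
    {"features": {"score": 40, "group_attr": "B"}, "label": 0, "group": "B"},
]

print(slice_accuracy(data, biased_model))        # {'A': 0.5, 'B': 1.0}
print(counterfactual_flips(data, biased_model,
                           "group_attr", ["A", "B"]))  # 2
```

The aggregate accuracy here is 75%, yet the per-slice view exposes that group A fares worse, and the two counterfactual flips localize the dependence on the sensitive attribute — exactly the kind of gap a continuous audit workflow is meant to surface.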