6. Socioeconomic and Environmental - Post-deployment

Unfair distribution of benefits from model access

Unfairly allocating or withholding benefits from certain groups due to hardware, software, or skills constraints or deployment contexts (e.g. geographic region, internet speed, devices)

Source: MIT AI Risk Repository (mit280)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit280

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.1 > Power centralization and unfair distribution of benefits

Mitigation strategy

1. Prioritize investment in digital literacy and specialized skills training across all affected demographic and geographic segments, directly addressing the proficiency constraints that hinder equitable model access and utility.

2. Implement fairness-aware design and validation frameworks, including continuous bias audits, so that model performance and benefit-allocation mechanisms remain equitable across groups defined by technical constraints (e.g., hardware, internet speed).

3. Establish clear, auditable governance policies that mandate transparency and monitoring of deployment contexts, enabling proactive detection and mitigation of systemic biases arising from hardware, software, or geographic limitations in the distribution of model-derived benefits.
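A continuous bias audit of the kind described above can be reduced, in its simplest form, to comparing benefit-allocation rates across deployment-context groups. The sketch below is a minimal illustration, assuming hypothetical usage logs; the group labels, the disparity metric (a min/max parity ratio), and all function names are illustrative assumptions, not part of the repository entry.

```python
# Minimal sketch of a benefit-allocation audit across deployment contexts.
# All data and names here are hypothetical illustrations.

from collections import defaultdict

def benefit_rates(records):
    """Per-group fraction of users who received the model-derived benefit."""
    totals = defaultdict(int)
    benefited = defaultdict(int)
    for group, got_benefit in records:
        totals[group] += 1
        benefited[group] += int(got_benefit)
    return {g: benefited[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Min/max ratio across groups: 1.0 means parity, lower means more skew."""
    return min(rates.values()) / max(rates.values())

# Hypothetical log: (deployment-context group, benefit received?)
records = [
    ("broadband", True), ("broadband", True), ("broadband", False),
    ("low-bandwidth", True), ("low-bandwidth", False), ("low-bandwidth", False),
]

rates = benefit_rates(records)
print(rates)                          # per-group benefit rates
print(round(disparity_ratio(rates), 2))  # → 0.5 (low-bandwidth users benefit half as often)
```

In a real monitoring pipeline the threshold at which such a ratio triggers review would be a governance decision; the point of the sketch is only that the audit can be made a routine, quantitative check rather than an ad hoc one.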

ADDITIONAL EVIDENCE

Example: Better hiring and promotion pathways for people with access to generative AI models (Gmyrek et al., 2023)