6. Socioeconomic and Environmental · Post-deployment

Access and Opportunity risks

The most serious access-related risks posed by advanced AI assistants concern the entrenchment and exacerbation of existing inequalities (World Inequality Database) or the creation of novel, previously unknown inequities. While advanced AI assistants are a novel technology in certain respects, there are reasons to believe that – without direct design interventions – they will continue to be affected by the inequities evidenced in present-day AI systems (Bommasani et al., 2022a). Many of the access-related risks we foresee mirror those described in the case studies and types of differential access.

Source: MIT AI Risk Repository, risk ID mit424

Entity: 1 - Human

Intent: 2 - Unintentional

Timing: 2 - Post-deployment

Risk ID: mit424

Domain lineage: 6. Socioeconomic and Environmental (262 mapped risks)

Subdomain: 6.1 > Power centralization and unfair distribution of benefits

Mitigation strategy

1. Institute mandatory, comprehensive bias mitigation processes throughout the AI lifecycle, from data collection and curation (using representative datasets) to model training (employing fairness-aware algorithms and constraints).

2. Establish a continuous audit framework that uses fairness metrics (e.g., disparate impact) and explainability reports to monitor and validate the equitable performance and access outcomes of the deployed AI system across diverse demographic groups.

3. Prioritize design interventions focused on reducing the "AI divide," specifically by promoting the development and widespread accessibility of user-friendly and cost-efficient AI assistants, to ensure the equitable distribution of socioeconomic benefits.
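As a minimal sketch of the kind of fairness metric the audit framework in step 2 could track, the following computes a disparate impact ratio over binary decisions. The group names, sample data, and the 0.8 threshold (the "four-fifths rule" used in US employment-selection guidance) are illustrative assumptions, not part of the repository entry:

```python
def disparate_impact_ratio(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions,
    where 1 is the favorable outcome (e.g. access granted).
    Returns the ratio of the lowest group selection rate to the highest."""
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical decision logs for two demographic groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 3/8 = 0.375
}

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
# Flag for human review when the ratio falls below the four-fifths rule.
print("flag for review" if ratio < 0.8 else "within four-fifths threshold")
```

A continuous audit would recompute such metrics on rolling windows of production decisions rather than a static sample, and pair them with explainability reports as the strategy describes.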