6. Socioeconomic and Environmental

Current access risks

At the same time, and despite this overall trend, AI systems are also not easily accessible to many communities. Such direct inaccessibility occurs for a variety of reasons, including: purposeful non-release (situation type 1; Wiggers and Stringer, 2023), prohibitive paywalls (situation type 2; Rogers, 2023; Shankland, 2023), hardware, compute, or bandwidth requirements (situation types 1 and 2; OpenAI, 2023), and language barriers, e.g. systems that function well only in English (situation type 2; Snyder, 2023), with more serious errors occurring in other languages (situation type 3; Deck, 2023). Similarly, there is some evidence of ‘actively bad’ artificial agents gating access to resources and opportunities, affecting material well-being in ways that disproportionately penalise historically marginalised communities (Block, 2022; Bogen, 2019; Eubanks, 2017). Existing direct and indirect access disparities surrounding artificial agents with natural language interfaces could persist, and even widen, if novel capabilities are layered on top of this base without adequate mitigation (see Chapter 3).

Source: MIT AI Risk Repository (mit426)

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit426

Domain lineage: 6. Socioeconomic and Environmental (262 mapped risks)

Subdomain: 6.1 > Power centralization and unfair distribution of benefits

Mitigation strategy

1. Implement an inclusive AI development lifecycle by soliciting input from and actively engaging historically marginalized communities in the design, training data curation, and continuous monitoring phases to preemptively identify and address sources of bias and inequity.

2. Mitigate financial and computational barriers by promoting open-source AI frameworks and prioritizing the development of small, energy-efficient models, thereby reducing the prohibitive paywalls and hardware requirements that restrict access for underserved populations.

3. Establish a comprehensive AI governance framework that mandates algorithmic audits and the integration of disaggregated fairness metrics to systematically detect and resolve biases that may result in 'actively bad' agents gating access to resources and opportunities.
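As a concrete illustration of the third strategy, a disaggregated fairness audit can start from something as simple as comparing selection rates across demographic groups. The sketch below is one minimal way to do this in plain Python; the function names, the binary-decision setting, and the example data are all assumptions for illustration, not part of the repository entry.

```python
# Minimal sketch of a disaggregated fairness check (hypothetical setup):
# compute the selection rate for each demographic group and the gap
# between the best- and worst-served groups (demographic parity gap).

from collections import defaultdict


def disaggregated_selection_rates(groups, decisions):
    """Return {group: selection_rate} for binary decisions (1 = granted)."""
    totals = defaultdict(int)
    granted = defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        granted[g] += d
    return {g: granted[g] / totals[g] for g in totals}


def demographic_parity_gap(rates):
    """Largest difference in selection rate across groups (0 = parity)."""
    vals = list(rates.values())
    return max(vals) - min(vals)


# Hypothetical audit data: group label and decision per applicant.
groups = ["A", "A", "A", "B", "B", "B", "B"]
decisions = [1, 1, 0, 1, 0, 0, 0]

rates = disaggregated_selection_rates(groups, decisions)
gap = demographic_parity_gap(rates)
```

In a real audit, such per-group rates would be computed over many protected attributes and decision types, and a non-trivial gap would trigger a deeper review of the gating system rather than serve as a verdict on its own.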