6. Socioeconomic and Environmental > 3 - Other

Future access risks

AI assistants currently tend to perform a limited set of isolated tasks: tools that classify or rank content execute predefined rules or offer constrained suggestions, and chatbots are often built with guardrails that limit the conversational turns they can take (e.g. Warren, 2023; see Chapter 4). However, an artificial agent that can execute sequences of actions on the user’s behalf – with ‘significant autonomy to plan and execute tasks within the relevant domain’ (see Chapter 2) – offers a greater range of capabilities and depth of use. This raises several distinct access-related risks, concerning liability and consent, that may disproportionately affect historically marginalised communities.

As noted above, in cases where an action can only be executed with an advanced AI assistant, not having access to the technology (e.g. due to limited internet access, not speaking the ‘right’ language or facing a paywall) means one cannot access that action (consider today’s eBay and Ticketmaster bots). Communication with many utility or commercial providers already requires (at least initial) interaction with their artificial agents (Schwerin, 2023; Verma, 2023a). It is not difficult to imagine a future in which a user needs an advanced AI assistant to interface with a more consequential resource, such as their hospital for appointments or their phone company to obtain service. Cases of inequitable performance, where the assistant systematically performs less well for certain communities (situation type 2), could impose serious costs on people in these contexts.

Moreover, advanced AI assistants are expected to be designed to act in line with user expectations. When acting on the user’s behalf, an assistant will need to infer aspects of what the user wants. This process may involve interpretation to decide between various sources of information (e.g. stated preferences and inferences based on past feedback or user behaviour) (see Chapter 5). However, cultural differences will also likely affect the system’s ability to make an accurate inference. Notably, the greater the cultural divide – say, between the culture of the developers and of the data on which the agent was trained and evaluated, and that of the user – the harder it will be to make reliable inferences about user wants (e.g. Beede et al., 2020; Widner et al., 2023), and the greater the likelihood of performance failures or value misalignment (see Chapter 11). This inference gap could make many forms of indirect opportunity inaccessible, and, as history indicates, the harms associated with these unknowns risk falling disproportionately on those already marginalised in the design process.

Source: MIT AI Risk Repository (mit427)
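To make the inference step in the passage above concrete, the Python sketch below shows one way a coverage gap between training data and user context could degrade both an assistant's preference estimate and its confidence in that estimate. Everything here is an illustrative assumption: the `PreferenceEvidence` fields, the `coverage` proxy for cultural distance, and the linear weighting scheme are introduced for exposition and are not proposed by the source.

```python
from dataclasses import dataclass


@dataclass
class PreferenceEvidence:
    """Illustrative evidence an assistant might weigh (all fields hypothetical)."""
    stated: float    # explicit, stated preference score in [0, 1]
    inferred: float  # preference inferred from past feedback or behaviour, in [0, 1]
    coverage: float  # crude proxy in [0, 1] for how well the training data
                     # covers this user's context (the 'cultural divide' above)


def estimate_preference(ev: PreferenceEvidence) -> tuple[float, float]:
    """Return (estimate, confidence) for what the user wants.

    The behavioural signal is down-weighted when coverage is low, so for
    users far from the training distribution the assistant leans on the
    stated preference and reports lower confidence; one simple way the
    inference gap described above can manifest.
    """
    weight = 0.5 * ev.coverage  # trust inferred behaviour less off-distribution
    estimate = weight * ev.inferred + (1.0 - weight) * ev.stated
    confidence = ev.coverage    # crude: confidence simply tracks coverage
    return estimate, confidence


if __name__ == "__main__":
    # Well-covered user: the behavioural inference contributes substantially.
    print(estimate_preference(PreferenceEvidence(stated=0.9, inferred=0.4, coverage=0.95)))
    # Poorly covered user: the estimate falls back on the stated preference,
    # and the low confidence signals a likely performance failure.
    print(estimate_preference(PreferenceEvidence(stated=0.9, inferred=0.4, coverage=0.15)))
```

The linear down-weighting is a deliberate simplification; the point is only that any such scheme makes accuracy and confidence depend on how well the training distribution covers the user, which is exactly where inequitable performance enters.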

ENTITY: 3 - Other

INTENT: 3 - Other

TIMING: 3 - Other

Risk ID: mit427

Domain lineage: 6. Socioeconomic and Environmental (262 mapped risks)

6.1 > Power centralization and unfair distribution of benefits

Mitigation strategy

1. Establish a comprehensive regulatory framework mandating non-AI-mediated access pathways for all essential public, utility, and consequential commercial services, to ensure universal access for populations limited by socioeconomic, linguistic, or infrastructure constraints.

2. Develop and enforce a robust protocol for algorithmic equity auditing, requiring system developers to test for and mitigate performance failures, inference gaps, and value misalignment across a spectrum of cultural, linguistic, and socioeconomic demographics, thereby minimizing the disproportionate imposition of costs on marginalized communities (see the sketch after this list).

3. Incentivize and fund public-interest AI research and development, including open-source large language models and public utility agents, to provide accessible, high-quality alternatives that mitigate the emergent "gate tax" and the concentration of capabilities associated with proprietary frontier AI systems.
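The disaggregated testing in item 2 can be illustrated with a minimal Python sketch. The function name `audit_success_rates`, the tolerance threshold, and the sample groups are hypothetical assumptions; a real audit protocol would need calibrated metrics, representative samples, and significance testing. The sketch simply computes per-group task-success rates and flags any group that trails the best-performing one.

```python
from collections import defaultdict


def audit_success_rates(records, tolerance=0.05):
    """Disaggregated audit sketch; records is an iterable of (group, succeeded).

    Computes a task-success rate per demographic group and flags any group
    whose rate trails the best-performing group by more than `tolerance`.
    """
    totals = defaultdict(lambda: [0, 0])  # group -> [successes, trials]
    for group, succeeded in records:
        totals[group][0] += int(succeeded)
        totals[group][1] += 1

    rates = {g: s / n for g, (s, n) in totals.items()}
    if not rates:
        return {}, []
    best = max(rates.values())
    flagged = sorted(g for g, r in rates.items() if best - r > tolerance)
    return rates, flagged


if __name__ == "__main__":
    # Hypothetical evaluation log: an assistant succeeds on 90% of tasks for
    # one language community but only 60% for another.
    sample = ([("en", True)] * 90 + [("en", False)] * 10
              + [("sw", True)] * 60 + [("sw", False)] * 40)
    rates, flagged = audit_success_rates(sample)
    print(rates)    # {'en': 0.9, 'sw': 0.6}
    print(flagged)  # ['sw']: the gap exceeds the tolerance, so the audit fails
```

Reporting rates per group rather than a single aggregate is the essential design choice here: an overall success rate of 75% in the example would mask the 30-point gap the audit is meant to surface.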