6. Socioeconomic and Environmental

Equality and inequality

AI assistant technology, like any service that confers a benefit on a user for a price, has the potential to disproportionately benefit economically richer individuals who can afford to purchase access (see Chapter 15). On a broader scale, the capabilities of local infrastructure may well bottleneck the performance of AI assistants, for example if network connectivity is poor or if there is no nearby data centre for compute. Thus, we face the prospect of heterogeneous access to technology, which has been known to drive inequality (Mirza et al., 2019; UN, 2018; Vassilakopoulou and Hustad, 2023). Moreover, AI assistants may automate some jobs of an assistive nature, thereby displacing human workers, a process that can exacerbate inequality (Acemoglu and Restrepo, 2022; see Chapter 17). Any change to inequality almost certainly implies an alteration to the network of social interactions between humans, and thus falls within the frame of cooperative AI.

AI assistants will arguably have even greater leverage over inequality than previous technological innovations. Insofar as they play a role in mediating human communication, they have the potential to generate new ‘in-group, out-group’ effects (Efferson et al., 2008; Fu et al., 2012). Suppose that users of AI assistants find it easier to schedule meetings with other users. From the perspective of an individual user, there are now two groups, distinguished by ease of scheduling. The user may experience cognitive similarity bias, whereby they favour other users (Orpen, 1984; Yeong Tan and Singh, 1995), further amplified by ease of communication with this ‘in-group’. Such effects are known to have an adverse impact on trust and fairness across groups (Chae et al., 2022; Lei and Vesely, 2010).

Inasmuch as AI assistants have general-purpose capabilities, they will confer advantages on users across a wider range of tasks in a shorter space of time than previous technologies. While the telephone enabled individuals to communicate more easily with other telephone users, it did not simultaneously automate aspects of scheduling, groceries, job applications, rent negotiations, psychotherapy and entertainment. The fact that AI assistants could affect inequality along multiple dimensions simultaneously warrants further attention (see Chapter 15).

Source: MIT AI Risk Repository (mit419)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit419

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.1 > Power centralization and unfair distribution of benefits

Mitigation strategy

1. Prioritize the development and implementation of universal access policies and tiered pricing models to systematically bridge the socioeconomic and infrastructural AI divide. This encompasses public and private investment in essential network connectivity and localized compute infrastructure to prevent performance bottlenecks and ensure equitable technological access across diverse geographic and economic strata.

2. Mandate substantial investment in proactive human capital strategies, including large-scale re-skilling and upskilling initiatives for the workforce, particularly targeting occupations highly susceptible to AI-driven automation and displacement. The objective is to facilitate the transition of workers into emerging roles and promote a paradigm of human-AI augmentation rather than substitution.

3. Establish and enforce robust regulatory frameworks requiring algorithmic transparency, external auditing, and bias impact assessments throughout the AI assistant lifecycle. This must specifically focus on detecting and mitigating biases embedded in training data and deployment outcomes that could foster new ‘in-group, out-group’ relational dynamics and exacerbate social inequality.