262 canonical MIT risk pages
6. Socioeconomic and Environmental
Distributional, institutional, and environmental risks created by AI deployment.
6. Socioeconomic and Environmental
Ability to automate jobs
The ability to automate jobs by AI models and systems can lead to significant job displacement, economic disruption, and social inequality.
6. Socioeconomic and Environmental
Accelerated development of nanotechnology produces uncontrolled production of toxic nanoparticles
AI is a key component for the development of nanobots, which could have dangerous environmental implications by invisibly modifying substances at nanoscale. For example, nanobots could start chemical reactions that would create invisible nanoparticles that are toxic and potentially lethal.
6. Socioeconomic and Environmental
Access and Opportunity risks
The most serious access-related risks posed by advanced AI assistants concern the entrenchment and exacerbation of existing inequalities (World Inequality Database) or the creation of novel, previously unknown, inequities. While advanced AI assistants are novel technology in certain respects, there are reasons to believe that – without direct design interventions – they will continue to be affected by inequities evidenced in present-day AI systems (Bommasani et al., 2022a). Many of the access-related risks we foresee mirror those described in the case studies and types of differential access.
6. Socioeconomic and Environmental
Accidents Are Hard to Avoid
Accidents can cascade into catastrophes, can be caused by sudden and unpredictable developments, and severe flaws and risks can take years to uncover (paraphrased, not a direct quote).
6. Socioeconomic and Environmental
AI jurisprudence
When considering legal frameworks, we note that at present no such framework has been identified in literature which would apply blame and responsibility to an autonomous agent for its actions. (Though we do suggest that the recent establishment of laws regarding autonomous vehicles may provide some early frameworks that can be evaluated for efficacy and gaps in future research.) Frequently the literature refers to existing liability and negligence laws which might apply to the manufacturer or operator of a device.
6. Socioeconomic and Environmental
AI Law and Regulation
This area strongly focuses on the control of AI by means of mechanisms like laws, standards or norms that are already established for different technological applications. Here, there are some challenges special to AI that need to be addressed in the near future, including the governance of autonomous intelligence systems, responsibility and accountability for algorithms as well as privacy and data security.
6. Socioeconomic and Environmental
AI Race (Environmental/Structural)
The immense potential of AIs has created competitive pressures among global players contending for power and influence. This “AI race” is driven by nations and corporations who feel they must rapidly build and deploy AIs to secure their positions and survive.
6. Socioeconomic and Environmental
AI-based automation increases income inequality
It seems quite plausible that progress in reinforcement learning and language models specifically could make it possible to automate a large amount of manual labour and knowledge work respectively [35, 45, 69], leading to widespread unemployment, and the wages for many remaining jobs being driven down by increased supply.
6. Socioeconomic and Environmental
Algorithmic monoculture
The dominance of specific AI models could lead to a lack of diversity in approaches, amplifying systemic risks if these models fail.
6. Socioeconomic and Environmental
Art - Creativity
In this cluster, concerns about negative impacts on human creativity, particularly through text-to-image models, are prevalent. Papers criticize financial harms or economic losses for artists due to the widespread generation of synthetic art as well as the unauthorized and uncompensated use of artists' works in training datasets. Additionally, given the challenge of distinguishing synthetic images from authentic ones, there is a call for systematically disclosing the non-human origin of such content, particularly through watermarking. Moreover, while some sources argue that text-to-image models lack 'true' creativity or the ability to produce genuinely innovative aesthetics, others point out positive aspects regarding the acceleration of human creativity.
6. Socioeconomic and Environmental
Auditor capacity mismatch
Auditors may not be able to address all of the specific safety, performance, or validation needs. Reports of passing audits may be more inclusive than can be justified due to a lack of knowledge of specific risks and how they can be tested, or a lack of capacity to perform sufficiently rigorous testing.
6. Socioeconomic and Environmental
Auditor failure
Auditors may not publicly disclose risks they find, may be required to not publicize shortcomings, or may not receive sufficient cooperation from the relevant internal parties.
6. Socioeconomic and Environmental
Authenticity
As the advancement of generative AI increases, it becomes harder to determine the authenticity of a piece of work. Photos that seem to capture events or people in the real world may be synthesized by DeepFake AI. The power of generative AI could lead to large-scale manipulations of images and videos, worsening the problem of the spread of fake information or news on social media platforms (Gragnaniello et al., 2022). In the field of arts, an artistic portrait or music could be the direct output of an algorithm. Critics have raised the issue that AI-generated artwork lacks authenticity since algorithms tend to generate generic and repetitive results (McCormack et al., 2019).
6. Socioeconomic and Environmental
Automation, Access and Environmental Harms
Harms that arise from environmental or downstream economic impacts of the language model
6. Socioeconomic and Environmental
Benchmark Inaccuracy (Benchmark saturation)
Benchmark saturation refers to benchmarks reaching their evaluation ceiling. The tendency towards benchmark saturation has been demonstrated in various benchmarks [19]. When benchmarks reach or are close to saturation, they stop being effective measures for new models, as more nuanced capability gains might not be detected.
6. Socioeconomic and Environmental
Benchmark Inaccuracy (Benchmarks may not accurately evaluate capabilities)
Benchmarks of AI systems can both underestimate and overestimate the capabilities of those AI systems. Underestimates can happen if an evaluation is not comprehensive enough, if the benchmark is saturated by existing models, or if the capabilities in question depend on a complicated setup, such as realistic computer programming tasks. Overestimates of capabilities can occur if an AI system is trained or fine-tuned on the contents of the benchmark, leading to overfitting.
6. Socioeconomic and Environmental
Benchmark Limitations (Insufficient benchmarks for AI safety evaluation)
Benchmarks dedicated to measuring the performance of AI systems (e.g., on programming or math tasks) are more well-developed than those for assessing safety and harms in AI systems [234]. This gap can lead to AI systems excelling in specific tasks while exhibiting harmful behaviors that go undetected. More safety-related evaluation datasets can help in identifying previously overlooked undesirable model behaviors.
6. Socioeconomic and Environmental
Benchmark Limitations (Underestimating capabilities that are not covered by benchmarks)
A lack of test coverage by benchmarks on specific abilities of a model can obscure the model’s capabilities from both the developer and the user [160]. This can lead to a false sense of safety and trust due to a lack of understanding of the model’s limitations.
6. Socioeconomic and Environmental
Benchmarking (Annotation contamination)
Annotation contamination refers to scenarios where the model is exposed to the benchmark labels during training [170]. This type of contamination can make the model learn the acceptable distribution of outputs. Combining this with raw data contamination of the test split, any evaluation made with the benchmark is invalidated because the entire test split is essentially leaked to the model.
6. Socioeconomic and Environmental
Benchmarking (Benchmark leakage or data contamination)
Benchmark leakage [235, 224, 221, 161] can happen when an AI model is trained or fine-tuned with evaluation-related data. This can lead to an unreliable model evaluation, especially if the data contains question-answer pairs from benchmarks.
6. Socioeconomic and Environmental
Benchmarking (Cross-lingual data contamination)
Models that have been trained on data encoded in multiple languages, such as LLMs trained on web-crawled data, may contain contamination that is obscured by translation [226]. The most basic form of this is when a benchmark is translated to another language and then fed to the model as training data. The fact that the benchmark is translated before becoming training data can obscure the contamination from detection methods, giving false assurance that the model has generalized on the capabilities that the benchmark tests for.
6. Socioeconomic and Environmental
Benchmarking (Guideline contamination)
Guideline contamination refers to scenarios where instructions for the collection, annotation, or use of the dataset are exposed to the model [170]. These instructions may contain explicit data-label pairs that can improve the model's capabilities for the task.
6. Socioeconomic and Environmental
Benchmarking (Post-deployment contamination)
Once a model is deployed, it can be exposed to benchmark data provided by the users [95, 170]. The model may then be further trained by these user inputs containing benchmark data.
6. Socioeconomic and Environmental
Benchmarking (Raw data contamination)
This type of contamination [170] occurs when the raw and unlabeled data of a benchmark is used as part of the training set. Such data may not be properly formatted and may contain noise, especially if the contamination happens before the data is pre-processed into the benchmark. If this contamination occurs, it could cast doubt on the few-shot and zero-shot performance of the model on that benchmark.
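The contamination types above are often screened for with simple lexical-overlap checks between training documents and benchmark items. Below is a minimal, illustrative sketch of a word-level n-gram overlap screen; the function names and the choice of n=8 are hypothetical conventions for this example, not the specific methods of the cited audits [170, 226], which are considerably more involved.

```python
def ngrams(text: str, n: int = 8) -> set:
    """Return the set of word-level n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_score(train_doc: str, benchmark_items: list[str], n: int = 8) -> float:
    """Fraction of benchmark items sharing at least one n-gram with the training doc."""
    if not benchmark_items:
        return 0.0
    train_grams = ngrams(train_doc, n)
    hits = sum(1 for item in benchmark_items if ngrams(item, n) & train_grams)
    return hits / len(benchmark_items)

# Usage: flag a training document that overlaps with too many test items.
train_doc = "the quick brown fox jumps over the lazy dog near the river bank today"
benchmark = [
    "the quick brown fox jumps over the lazy dog near the river",   # leaked item
    "completely unrelated question about arithmetic and geometry",  # clean item
]
score = contamination_score(train_doc, benchmark, n=8)  # 1 of 2 items overlap
```

Note that a purely lexical check like this misses the cross-lingual contamination described above, since translation destroys surface n-gram overlap.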
6. Socioeconomic and Environmental
Between-country issues: global inequality
There is an even greater divide between the countries currently leading in AI and those falling behind. While AI is widely considered a national priority, with almost 40% of countries having created an AI strategy [437], the implementation of these strategies depends on scarce resources, including trained STEM talent and computing power. These resources are predictably concentrated: 59% of leading AI researchers currently work in the US, and another 20% in China and Europe [372]. Figure 9 shows post-college migration among AI researchers who have published at one top conference, as of 2019.
6. Socioeconomic and Environmental
Biodiversity loss
Biodiversity loss - Over-expansion of technology infrastructure, or inadequate alignment of technology with sustainable practices, leading to deforestation, habitat destruction, and fragmentation and loss of biodiversity.
6. Socioeconomic and Environmental
Building an AI able to adapt to humans
This category involves almost 9% of the articles and deals with ethical concerns arising from AI's capacity to interact with humans in the workplace.
6. Socioeconomic and Environmental
Capabilities that enable substitution of humans
The progressive replacement of human roles by AI models and systems can lead to societal disruption.
6. Socioeconomic and Environmental
Carbon emissions
Carbon emissions - Release of carbon dioxide, nitric oxide and other gases, increasing carbon emissions, exacerbating climate change, and negatively impacting local communities.
6. Socioeconomic and Environmental
Challenges in perceiving, measuring, and recognizing harm
Harm from AI often manifests subtly or over the long term, making it difficult to identify, measure, and address effectively.
6. Socioeconomic and Environmental
Combination failures
Harms could result from a combination of regulatory, management, and operational failures.
6. Socioeconomic and Environmental
Competing for jobs
AI agents may compete against humans for jobs, though history shows that when a technology replaces a human job, it creates new jobs that need more skills.
6. Socioeconomic and Environmental
Competitive pressures in GPAI product release
In competitive situations, developers of general-purpose AI systems might cut corners on the safety evaluation of their GPAI model and instead spend more time and effort on the capabilities of those systems [183, 69]. This is especially dangerous if the capabilities of such AI systems are correlated with the risk they pose [162].
6. Socioeconomic and Environmental
Complex attribution and responsibility
When multiple actors are involved in AI development and deployment, it becomes difficult to assign responsibility for harm, complicating accountability.
6. Socioeconomic and Environmental
Compliance
The potential for AI systems to violate laws, regulations, and ethical guidelines (including copyrights). Non-compliance can lead to legal penalties, reputational damage, and loss of trust. While other risks in our taxonomy apply to system developers, users, and broader society, this risk is generally restricted to the former two groups.
6. Socioeconomic and Environmental
Concentration of Authority
Use of generative AI systems to contribute to authoritative power and reinforce dominant value systems can be intentional and direct, or more indirect. Concentrating authoritative power can also exacerbate inequality and lead to exploitation.
6. Socioeconomic and Environmental
Concentration of market power (Negative effects of increased market concentration)
The concentration of AI assets—encompassing data, hardware, and expertise—within a small group of global tech firms raises many concerns. Such a situation may stifle healthy competition, impede innovation, and potentially result in elevated costs for accessing AI technologies. Firms with control over essential resources for developing AI models may restrict access to these resources to prevent competition. For instance, if, in the future, training AI models increasingly relies on proprietary data, smaller organizations lacking access to such data might encounter significant barriers to entry and growth.
6. Socioeconomic and Environmental
Concentration of market power (Trend toward market concentration)
In the generative AI market, barriers to entry are very high. Developers need access to vast volumes of data, computational resources, technical expertise, and capital. Large technology companies with such access are able to exploit economies of scale, economies of scope, and feedback effects (learning effects from user-generated data). All this gives them an overwhelming advantage over smaller companies, making competition increasingly challenging for these smaller entities.
6. Socioeconomic and Environmental
Concentration of Power
Governments might pursue intense surveillance and seek to keep AIs in the hands of a trusted minority. This reaction, however, could easily become an overcorrection, paving the way for an entrenched totalitarian regime that would be locked in by the power and capacity of AIs.
6. Socioeconomic and Environmental
Conflicts of interest in auditor selection
Conflicts of interest can arise if there is no independence in the auditor selection process or if the auditors are closely associated with the developer [123, 157]. In such cases, the conflict of interest can appear even if third-party evaluators are involved. In the case of external auditing, the potential candidates might be selected from a narrow group of auditors, or have conflicting financial incentives for whether to report model shortcomings publicly.
6. Socioeconomic and Environmental
Copyright
The memorization effect of LLMs on training data can enable users to extract certain copyright-protected content that belongs to the LLM's training data.
6. Socioeconomic and Environmental
Copyright
According to the U.S. Copyright Office (n.d.), copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression (U.S. Copyright Office, n.d.). Generative AI is designed to generate content based on the input given to it. Some of the contents generated by AI may be others' original works that are protected by copyright laws and regulations. Therefore, users need to be careful and ensure that generative AI has been used in a legal manner such that the content that it generates does not violate copyright (Pavlik, 2023). Another relevant issue is whether generative AI should be given authorship (Sallam, 2023). Murray (2023) discussed generative art linked to non-fungible tokens (NFTs) and indicated that according to current U.S. copyright laws, generative art lacks copyrightability because it is generated by a non-human. The issue of AI authorship affects copyright law's underlying assumptions about creativity (Bridy, 2012).
6. Socioeconomic and Environmental
Copyright - Authorship
The emergence of generative AI raises issues regarding disruptions to existing copyright norms. Frequently discussed in the literature are violations of copyright and intellectual property rights stemming from the unauthorized collection of text or image training data. Another concern relates to generative models memorizing or plagiarizing copyrighted content. Additionally, there are open questions and debates around the copyright or ownership of model outputs, the protection of creative prompts, and the general blurring of traditional concepts of authorship.
6. Socioeconomic and Environmental
Copyright challenges (copyright-infringing output)
Even though models generally create new outputs, it is possible that the content produced by a generative AI tool—such as an image, or even computer code—could turn out to be almost identical to that used in the training data. Given that generative AI models tend to memorize fragments of their training data, they might reproduce these fragments, potentially leading to charges of copyright infringement.
6. Socioeconomic and Environmental
Copyright challenges (training models using copyrighted output)
Generative AI companies are regularly accused of violating copyright law by training AI models on copyrighted works without gaining permission or paying compensation to the copyright owners. In fact, a substantial number of copyrighted documents and books have been incorporated into the training datasets of generative AI models.
6. Socioeconomic and Environmental
Copyright infringement
The use of large amounts of copyrighted data for training general-purpose AI models poses a challenge to traditional intellectual property laws, and to systems of consent, compensation, and control over data. The use of copyrighted data at scale by organisations developing general-purpose AI is likely to alter incentives around creative expression.
6. Socioeconomic and Environmental
Copyright infringement
A model might generate content that is similar or identical to existing work protected by copyright or covered by open-source license agreement.
6. Socioeconomic and Environmental
Copyright Violation
LLM systems may output content similar to existing works, infringing on copyright owners.
6. Socioeconomic and Environmental
Corporate AI Race
Although competition between companies can be beneficial, creating more useful products for consumers, there are also pitfalls. First, the benefits of economic activity may be unevenly distributed, incentivizing those who benefit most from it to disregard the harms to others. Second, under intense market competition, businesses tend to focus much more on short-term gains than on long-term outcomes. With this mindset, companies often pursue something that can make a lot of profit in the short term, even if it poses a societal risk in the long term.
6. Socioeconomic and Environmental
Corporate power may impede effective governance
The increasing power and influence of large corporations may make effective governance difficult. There exists a power asymmetry between corporate entities profiting from LLMs and other social groups (e.g. civil society). State-of-the-art LLMs are developed by, or in partnership with, some of the world's largest private tech companies...This poses a risk of governance protocols related to LLMs becoming excessively favorable to tech companies, potentially leading to regulatory capture at the cost of the interests of other societal groups, particularly marginalized communities who have historically been disproportionately affected by poorly designed AI technologies (Reventlow, 2021).
6. Socioeconomic and Environmental
Cultural dispossession
Cultural dispossession - Intentional and/or unintentional erasure of cultural goods and values, such as ways of speaking, expressing humour, or sounds and voices that contribute to a cultural identity, or their inappropriate re-use in other cultures.
6. Socioeconomic and Environmental
Current access risks
At the same time, and despite this overall trend, AI systems are also not easily accessible to many communities. Such direct inaccessibility occurs for a variety of reasons, including: purposeful non-release (situation type 1; Wiggers and Stringer, 2023), prohibitive paywalls (situation type 2; Rogers, 2023; Shankland, 2023), hardware and compute requirements or bandwidth (situation types 1 and 2; OpenAI, 2023), or language barriers, e.g. they only function well in English (situation type 2; Snyder, 2023), with more serious errors occurring in other languages (situation type 3; Deck, 2023). Similarly, there is some evidence of 'actively bad' artificial agents gating access to resources and opportunities, affecting material well-being in ways that disproportionately penalise historically marginalised communities (Block, 2022; Bogen, 2019; Eubanks, 2017). Existing direct and indirect access disparities surrounding artificial agents with natural language interfaces could potentially continue – if novel capabilities are layered on top of this base without adequate mitigation (see Chapter 3).
6. Socioeconomic and Environmental
Dangerous development races
Competitive pressures could lead to the neglect of safety measures in AI development.
6. Socioeconomic and Environmental
Data and Content Moderation Labor
Two key ethical concerns in the use of crowdwork for generative AI systems are: crowdworkers are frequently subject to working conditions that are taxing and debilitative to both physical and mental health, and there is a widespread deficit in documenting the role crowdworkers play in AI development. This contributes to a lack of transparency and explainability in resulting model outputs. Manual review is necessary to limit the harmful outputs of AI systems, including generative AI systems. A common harmful practice is to intentionally employ crowdworkers with few labor protections, often taking advantage of highly vulnerable workers, such as refugees [119, p. 18], incarcerated people [54], or individuals experiencing immense economic hardship [98, 181]. This precarity allows a myriad of harmful practices, such as companies underpaying or even refusing to pay workers for completed work (see Gray and Suri [93, p. 90] and Berg et al. [29, p. 74]), with no avenues for worker recourse. Finally, critical aspects of crowdwork are often left poorly documented, or entirely undocumented [88].
6. Socioeconomic and Environmental
Democracy
The erosion of democratic processes and public trust in social/political institutions.
6. Socioeconomic and Environmental
Demographic diversity of researchers
The AI research establishment inherits patterns of under-representation that are dominant in most technical fields. In North America, large parts of professional AI research require a Ph.D., yet less than 25% of Ph.D. computer scientists are women, and fewer than 2% are Black or African American [608]. This holds globally and outside the research community: LinkedIn data suggests that only 22% of AI professionals are women [161]. Since the vast majority of AI practitioners work for private companies, limited corporate statistics on gender and racial diversity hinder a full understanding of the situation [402], but those few statistics that exist are not encouraging: only 5% of Google and 7% of Microsoft employees are Black or African American, with potentially even lower representation at the more senior levels [212, 384].
6. Socioeconomic and Environmental
Dependency on providers
Excessive reliance on specific AI providers can lead to vulnerabilities due to lack of alternatives or interoperability.
6. Socioeconomic and Environmental
Design of AI
ethical concerns regarding how AI is designed and who designs it
6. Socioeconomic and Environmental
Destabilising political impacts from AI systems
Destabilising political impacts from AI systems (e.g., polarization, legitimacy of elections), on the international political economy, or on international security in terms of the balance of power, technology races and international stability, and the speed and character of war
6. Socioeconomic and Environmental
Devaluation of Labor & Heightened Economic Inequality
According to a White House report, much of the development and adoption of AI is intended to automate rather than augment work. The report notes that a focus on automation could lead to a less democratic and less fair labor market...In addition, generative AI fuels the continued global labor disparities that exist in the research and development of AI technologies... The development of AI has always displayed a power disparity between those who work on AI models and those who control and profit from these tools. Overseas workers training AI chatbots or people whose online content has been involuntarily fed into the training models do not reap the enormous profits that generative AI tools accrue. Instead, companies exploiting underpaid and replaceable workers or the unpaid labor of artists and content creators are the ones coming out on top. The development of generative AI technologies only contributes to this power disparity, where tech companies that heavily invest in generative AI tools benefit at the expense of workers.
6. Socioeconomic and Environmental
Development of unsafe AGI
The risks associated with the race to develop the first AGI, including the development of poor quality and unsafe AGI, and heightened political and control issues.
6. Socioeconomic and Environmental
Difficult to develop metrics for evaluating benefits or harms caused by AI assistants
Another difficulty facing AI assistant systems is that it is challenging to develop metrics for evaluating particular aspects of benefits or harms caused by the assistant – especially in a sufficiently expansive sense, which could involve much of society (see Chapter 19). Having these metrics is useful both for assessing the risk of harm from the system and for using the metric as a training signal.
6. Socioeconomic and Environmental
Digital divide
The digital divide is often defined as the gap between those who have and do not have access to computers and the Internet (Van Dijk, 2006). As the Internet gradually becomes ubiquitous, a second-level digital divide, which refers to the gap in Internet skills and usage between different groups and cultures, is brought up as a concern (Scheerder et al., 2017). As an emerging technology, generative AI may widen the existing digital divide in society. The “invisible” AI underlying AI-enabled systems has made the interaction between humans and technology more complicated (Carter et al., 2020). For those who do not have access to devices or the Internet, or those who live in regions that are blocked by generative AI vendors or websites, the first-level digital divide may be widened between them and those who have access (Bozkurt & Sharma, 2023). For those from marginalized or minority cultures, they may face language and cultural barriers if their cultures are not thoroughly learned by or incorporated into generative AI models. Furthermore, for those who find it difficult to utilize the generative AI tool, such as some elderly, the second-level digital divide may emerge or widen (Dwivedi et al., 2023). To deal with the digital divide, having more accessible AI as well as AI literacy training would be beneficial.
6. Socioeconomic and Environmental
Direct competition with humans
One or more artificial agent(s) could have the capacity to directly outcompete humans, for example through capacity to perform work faster, better adaptation to change, vaster knowledge base to draw from, etc. This may result in human labor becoming more expensive or less effective than artificial labor, leading to redundancies or extinction of the human labor force.
6. Socioeconomic and Environmental
Disparate access to benefits due to hardware, software, skill constraints
Due to differential internet access, language, skill, or hardware requirements, the benefits from LMs are unlikely to be equally accessible to all people and groups who would like to use them. Inaccessibility of the technology may perpetuate global inequities by disproportionately benefiting some groups. Language-driven technology may increase accessibility to people who are illiterate or suffer from learning disabilities. However, these benefits depend on a more basic form of accessibility based on hardware, internet connection, and skill to operate the system
6. Socioeconomic and Environmental
Disruption of Industries
Industries that require less creativity, critical thinking, and personal or affective interaction, such as translation, proofreading, responding to straightforward inquiries, and data processing and analysis, could be significantly impacted or even replaced by generative AI (Dwivedi et al., 2023). This disruption caused by generative AI could lead to economic turbulence and job volatility, while generative AI can facilitate and enable new business models because of its ability to personalize content, carry out human-like conversational service, and serve as intelligent assistants.
6. Socioeconomic and Environmental
Economic
AI is predicted to bring increased GDP per capita by performing existing jobs more efficiently and compensating for a decline in the workforce, especially due to population aging. However, the potential substitution of many low- and middle-income jobs could bring extensive unemployment
6. Socioeconomic and Environmental
Economic AI Risks
In the context of economic AI risks two major risks dominate. These refer to the disruption of the economic system due to an increase of AI technologies and automation. For instance, a higher level of AI integration into the manufacturing industry may result in massive unemployment, leading to a loss of taxpayers and thus negatively impacting the economic system (Boyd & Wilson, 2017; Scherer, 2016). This may also be associated with the risk of losing control and knowledge of organisational processes as AI systems take over an increasing number of tasks, replacing employees in these processes.
6. Socioeconomic and Environmental
Economic Harms
These harms pertain to an individual’s or group’s economic standing. At the individual level, such harms include adverse impacts on an individual’s income, job quality or employment status. At the group level, such harms include deepening inequalities between groups or frustrating a group’s access to resources. Advanced AI assistants could cause economic harm by controlling, limiting or eliminating an individual’s or society’s ability to access financial resources, money or financial decision-making, thereby influencing an individual’s ability to accumulate wealth.
6. Socioeconomic and Environmental
Economic Power Centralisation and Inequality
Increasingly advanced general purpose AI models pose the risk of a concentration of economic power and exacerbation of existing inequalities through disparities in effective access to these models. This can materialise on multiple levels, between developers of general purpose AI models and companies building applications on them, between individuals and between countries on a global scale.
6. Socioeconomic and Environmental
Economy
Economic disruptions ranging from large impacts on the labor market to broader economic changes that could lead to exacerbated wealth inequality, instability in the financial system, labor exploitation or other economic dimensions.
6. Socioeconomic and Environmental
Ecosystem and Environment
Impacts at a high-level, from the AI ecosystem to the Earth itself, are necessarily broad but can be broken down into components for evaluation.
6. Socioeconomic and Environmental
Effects on Inequality
LLMs could potentially worsen socioeconomic inequalities (Capraro et al., 2023). Effects on inequality are closely linked to the effects of LLMs on workers but ultimately depend on how the fruits of technological progress are distributed...First, if the role and compensation of capital rise and the role and compensation of labor decline in an LLM-powered economy, inequality may go up because work is the main source of income for the majority of people...Second, the large fixed cost of training cutting-edge LLMs and the network effects involved imply that the market for the most advanced LLMs tends towards a natural monopoly structure in which only one or a small number of players will be successful, a phenomenon that has been termed ‘algorithmic monoculture’ in the literature (Kleinberg and Raghavan, 2021; Bommasani et al., 2022). As a result, LLM developers may amass significant market power. This might result in reduced social welfare, and lead to LLM-providers extracting monopoly rents from their customers (Kleinberg and Raghavan, 2021; Jagadeesan et al., 2023)...Third, as LLMs are becoming more powerful, who has access and who hasn’t is becoming a more and more important question. For example, automated coding tools have been shown to produce significant productivity gains, e.g. > 50% in some cases (Peng et al., 2023). Individuals who don’t have access – whether it is for financial reasons, for reasons of education, because of corporate or governmental policies, or for geopolitical reasons – might be at a growing disadvantage.
6. Socioeconomic and Environmental
Effects on the Workforce
Rapid advances in LLMs pose three distinct sets of challenges for workers’ incomes (Korinek and Stiglitz, 2019; Susskind, 2023). First, they are likely to accelerate the rate of job turnover and disruption – affecting more workers, including more highly skilled workers, and making the adjustment process for society more difficult than what we were used to from prior technological advances...Second, although technological progress means that society may produce more wealth overall, there is a risk that the general-purpose nature of LLMs may lead to progress that is biased against labor, meaning that the share of that wealth that goes to labor may decline...Third, if future LLMs and robots advance to the point where they can perform virtually all the work tasks, they would disrupt labor markets more fundamentally: if machines can do workers’ jobs, wages would fall to machines’ user cost (Korinek and Juelfs, 2023). This would pose fundamental challenges for labor markets and income distribution (Korinek, 2023).
6. Socioeconomic and Environmental
Electronic waste
Electronic waste - Electrical or electronic equipment that is waste, including all components, sub-assemblies and consumables that are part of the equipment at the time the equipment becomes waste
6. Socioeconomic and Environmental
Emergent access risks
Emergent access risks are most likely to arise when current and novel capabilities are combined. Emergent risks can be difficult to foresee fully (Ovadya and Whittlestone, 2019; Prunkl et al., 2021) due to the novelty of the technology (see Chapter 1) and the biases of those who engage in product design or foresight processes (D’Ignazio and Klein, 2020). Indeed, people who occupy relatively advantaged social, educational and economic positions in society are often poorly equipped to foresee and prevent harm because they are disconnected from the lived experiences of those who would be affected. Drawing upon access concerns that surround existing technologies, we anticipate three possible trends: • Trend 1: Technology as societal infrastructure. If advanced AI assistants are adopted by organisations or governments in domains affecting material well-being, ‘opting out’ may no longer be a real option for people who want to continue to participate meaningfully in society. Indeed, if this trend holds, there could be serious consequences for communities with no access to AI assistants or who only have access to less capable systems (see also Chapter 14). For example, if advanced AI assistants gate access to information and resources, these resources could become inaccessible for people with limited knowledge of how to use these systems, reflecting the skill-based dimension of digital inequality (van Dijk, 2006). Addressing these questions involves reaching beyond technical and logistical access considerations – and expanding the scope of consideration to enable full engagement and inclusion for differently situated communities. • Trend 2: Exacerbating social and economic inequalities. Technologies are not distinct from but embedded within wider sociopolitical assemblages (Haraway, 1988; Harding, 1998, 2016).
If advanced AI assistants are institutionalised and adopted at scale without proper foresight and mitigation measures in place, then they are likely to scale or exacerbate inequalities that already exist within the sociocultural context in which the system is used (Bauer and Lizotte, 2021; Zajko, 2022). If the historical record is anything to go by, the performance inequities evidenced by advanced AI assistants could mirror social hierarchies around gender, race, disability and culture, among others – asymmetries that deserve deeper consideration and need to be significantly addressed (e.g. Buolamwini and Gebru, 2018). • Trend 3: Rendering more urgent responsible AI development and deployment practices, such as those supporting the development of technologies that perform fairly and are accountable to a wide range of parties. As Corbett and Denton (2023, 1629) argue: ‘The impacts of achieving [accountability and fairness] in almost any situation immediately improves the conditions of people’s lives and better society’. However, many approaches to developing AI systems, including assistants, pay little attention to how context shapes what accountability or fairness means (Sartori and Theodorou, 2022), or how these concepts can be put in service of addressing inequalities related to motivational access (e.g. wanting/trust in technology) or use (e.g. different ways to use a technology) (van Dijk, 2006). Advanced AI assistants are complex technologies that will enable a plurality of data and content flows that necessitate in-depth analysis of social impacts. As many sociotechnical and responsible AI practices were developed for conventional ML technologies, it may be necessary to develop new frameworks, approaches and tactics (see Chapter 19). We explore practices for emancipatory and liberatory access in the following section.
6. Socioeconomic and Environmental
Energy Consumption
Some learning algorithms, including deep learning, utilize iterative learning processes [23]. This approach results in high energy consumption.
6. Socioeconomic and Environmental
Energy-intensive processes
AI data collection, storage, and model training are energy-intensive, contributing to environmental risks.
6. Socioeconomic and Environmental
Entrenchment and exacerbation of existing inequalities
The most serious access-related risks posed by advanced AI assistants concern the entrenchment and exacerbation of existing inequalities (World Inequality Database) or the creation of novel, previously unknown, inequities. While advanced AI assistants are novel technology in certain respects, there are reasons to believe that – without direct design interventions – they will continue to be affected by inequities evidenced in present-day AI systems (Bommasani et al., 2022a). Many of the access-related risks we foresee mirror those described in the case studies and types of differential access. In this section, we link them more tightly to elements of the definition of an advanced AI assistant to better understand and mitigate potential issues – and lay the path for assistants that support widespread and inclusive opportunity and access. We begin with the existing capabilities set out in the definition (see Chapter 2) before applying foresight to those that are more novel and emergent. Current capabilities: Artificial agents with natural language interfaces. Artificial agents with natural language interfaces are widespread (Browne, 2023) and increasingly integrated into the social fabric and existing information infrastructure, including search engines (Warren, 2023), business messaging apps (Slack, 2023), research tools (ATLAS.ti, 2023) and accessibility apps for blind and low-vision people (Be My Eyes, 2023). There is already evidence of a range of sociotechnical harms that can arise from the use of artificial agents with natural language interfaces when some communities have inferior access to them (Weidinger et al., 2021). As previously described, these harms include inferior quality of access (in situation type 2) across user groups, which may map onto wider societal dynamics involving race (Harrington et al., 2022), disability (Gadiraju et al., 2023) and culture (Jenka, 2023). 
As developers make it easier to integrate these technologies into other tools, services and decision-making systems (e.g. Marr, 2023; Brockman et al., 2023; Pinsky, 2023), their uptake could make existing performance inequities more pronounced or introduce them to new and wider publics.
6. Socioeconomic and Environmental
Environment
AI is already helping to combat the impact of climate change with smart technology and sensors reducing emissions. However, it is also a key component in the development of nanobots, which could have dangerous environmental impacts by invisibly modifying substances at nanoscale.
6. Socioeconomic and Environmental
Environment
The impact of AI on the environment, including risks related to climate change and pollution.
6. Socioeconomic and Environmental
Environmental
The risk of harm to the natural environment posed by the ML system.
6. Socioeconomic and Environmental
Environmental
Environmental - Damage to the environment directly or indirectly caused by a technology system or set of systems.
6. Socioeconomic and Environmental
Environmental & Societal Impact
Addresses AI's broader societal effects, including labor displacement, mental health impacts, and issues from manipulative technologies like deepfakes. Additionally, it considers AI's environmental footprint, balancing resource strain and training-related carbon emissions against AI's potential to help address environmental problems.
6. Socioeconomic and Environmental
Environmental and socioeconomic harms
At a time of increasing climate urgency, the energy consumption and carbon footprint of AI applications are also matters of ethics and responsibility [68]. As with other energy-intensive technologies like proof-of-work blockchain, the call is to research more environmentally sustainable algorithms to offset the increasing scale of use.
6. Socioeconomic and Environmental
Environmental cost
Large-scale DL systems can produce significant carbon emissions as a result of the computational demands of training runs and inference [539].
6. Socioeconomic and Environmental
Environmental cost (energy consumption)
Training large AI models requires a substantial amount of computing power to handle vast datasets, which translates into high energy consumption.
6. Socioeconomic and Environmental
Environmental cost (water consumption)
Data centers use water for cooling to prevent servers from overheating. The water consumption associated with AI training and inference processes can be substantial, impacting local water resources.
6. Socioeconomic and Environmental
Environmental Costs
The computing power used in training, testing, and deploying generative AI systems, especially large scale systems, uses substantial energy resources and thereby contributes to the global climate crisis by emitting greenhouse gasses.
6. Socioeconomic and Environmental
Environmental damage
Creating negative environmental impacts though model development and deployment
6. Socioeconomic and Environmental
Environmental harms
depletion or contamination of natural resources, and damage to built environments... that may occur throughout the lifecycle of digital technologies [170, 237] from “cradle (mining) to usage (consumption) to grave (waste)”
6. Socioeconomic and Environmental
Environmental harms from operating LMs
LMs (and AI more broadly) can have an environmental impact at different levels, including: (1) direct impacts from the energy used to train or operate the LM, (2) secondary impacts due to emissions from LM-based applications, (3) system-level impacts as LM-based applications influence human behaviour (e.g. increasing environmental awareness or consumption), and (4) resource impacts on precious metals and other materials required to build hardware on which the computations are run e.g. data centres, chips, or devices. Some evidence exists on (1), but (2) and (3) will likely be more significant for overall CO2 emissions, and harder to measure [96]. (4) may become more significant if LM-based applications lead to more computations being run on mobile devices, increasing overall demand, and is modulated by life-cycles of hardware.
6. Socioeconomic and Environmental
Environmental harms from operating LMs
Large-scale machine learning models, including LMs, have the potential to create significant environmental costs via their energy demands, the associated carbon emissions for training and operating the models, and the demand for fresh water to cool the data centres where computations are run (Mytton, 2021; Patterson et al., 2021).
6. Socioeconomic and Environmental
Environmental impacts
Environmental harm, Sustainability
6. Socioeconomic and Environmental
Environmental impacts
Increasing use of AI systems, and their growing energy needs, could also have environmental impacts. All of these could become more acute as AI becomes more capable.
6. Socioeconomic and Environmental
Environmental Impacts
The production process of these devices requires raw materials such as nickel, cobalt, and lithium in such high quantities that the Earth may soon no longer be able to supply them in sufficient amounts.
6. Socioeconomic and Environmental
Environmental Impacts
Impacts due to high compute resource utilization in training or operating GAI models, and related outcomes that may adversely impact ecosystems.
6. Socioeconomic and Environmental
Environmental risk
AI models are often trained using large amounts of computation. This process is very energy intensive, potentially leading to significant greenhouse emissions depending on the energy sources [132]. Experts believe drastically increasing carbon emissions could accelerate climate change, which may constitute a catastrophic risk [133].
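The link between compute, energy, and emissions described above can be made concrete with a back-of-envelope sketch. All figures below (GPU count, per-GPU power draw, data-centre PUE, grid carbon intensity) are illustrative assumptions for demonstration, not values taken from the cited sources:

```python
# Back-of-envelope estimate of CO2 emissions from an AI training run.
# Every parameter value here is an illustrative assumption.

def training_emissions_tonnes(num_gpus: int,
                              gpu_power_kw: float,
                              hours: float,
                              pue: float,
                              grid_kg_co2_per_kwh: float) -> float:
    """Estimate emissions in tonnes of CO2-equivalent.

    energy (kWh)  = num_gpus * gpu_power_kw * hours * PUE
    emissions (t) = energy * grid carbon intensity / 1000
    """
    energy_kwh = num_gpus * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh / 1000.0

# Hypothetical run: 1,000 GPUs at 0.7 kW each for 30 days,
# data-centre PUE of 1.2, grid intensity of 0.4 kg CO2/kWh.
estimate = training_emissions_tonnes(1000, 0.7, 30 * 24, 1.2, 0.4)
print(f"~{estimate:.0f} t CO2e")  # roughly 242 tonnes under these assumptions
```

As the risk entry notes, the outcome depends heavily on the energy source: the same run on a low-carbon grid (say 0.05 kg CO2/kWh) would emit roughly an eighth as much, which is why the choice of grid and data centre dominates the footprint.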
6. Socioeconomic and Environmental
Epistemic Harms
Algorithmic recommender systems reinforce and amplify anthropocentric bias or desire of some people for animal cruelty as entertainment — leading to greater harm to animals through reinforcement of meat eating from factory farms, cruel uses of animals for entertainment, etc
6. Socioeconomic and Environmental
Equality and inequality
AI assistant technology, like any service that confers a benefit to a user for a price, has the potential to disproportionately benefit economically richer individuals who can afford to purchase access (see Chapter 15). On a broader scale, the capabilities of local infrastructure may well bottleneck the performance of AI assistants, for example if network connectivity is poor or if there is no nearby data centre for compute. Thus, we face the prospect of heterogeneous access to technology, and this has been known to drive inequality (Mirza et al., 2019; UN, 2018; Vassilakopoulou and Hustad, 2023). Moreover, AI assistants may automate some jobs of an assistive nature, thereby displacing human workers, a process that can exacerbate inequality (Acemoglu and Restrepo, 2022; see Chapter 17). Any change to inequality almost certainly implies an alteration to the network of social interactions between humans, and thus falls within the frame of cooperative AI. AI assistants will arguably have even greater leverage over inequality than previous technological innovations. Insofar as they will play a role in mediating human communication, they have the potential to generate new ‘in-group, out-group’ effects (Efferson et al., 2008; Fu et al., 2012). Suppose that the users of AI assistants find it easier to schedule meetings with other users. From the perspective of an individual user, there are now two groups, distinguished by ease of scheduling. The user may experience cognitive similarity bias whereby they favour other users (Orpen, 1984; Yeong Tan and Singh, 1995), further amplified by ease of communication with this ‘in-group’. Such effects are known to have an adverse impact on trust and fairness across groups (Chae et al., 2022; Lei and Vesely, 2010). Insofar as AI assistants have general-purpose capabilities, they will confer advantages on users across a wider range of tasks in a shorter space of time than previous technologies.
While the telephone enabled individuals to communicate more easily with other telephone users, it did not simultaneously automate aspects of scheduling, groceries, job applications, rent negotiations, psychotherapy and entertainment. The fact that AI assistants could affect inequality on multiple dimensions simultaneously warrants further attention (see Chapter 15).
6. Socioeconomic and Environmental
Ethical Risks (Risks of exacerbating social discrimination and prejudice, and widening the intelligence divide)
AI can be used to collect and analyze human behaviors, social status, economic status, and individual personalities, labeling and categorizing groups of people in order to treat them in a discriminatory manner, thus causing systematic and structural social discrimination and prejudice. At the same time, the intelligence divide between regions would widen.
6. Socioeconomic and Environmental
Exacerbating Climate Change
The growing field of generative AI brings with it direct and severe impacts on our climate: generative AI comes with a high carbon footprint and similarly high resource price tag, which largely flies under the radar of public AI discourse. Training and running generative AI tools requires companies to use extreme amounts of energy and physical resources. Training one natural language processing model with normal tuning and experiments emits, on average, the same amount of carbon that seven people do over an entire year.
6. Socioeconomic and Environmental
Exacerbating Market Power and Concentration
Major tech companies have also been the dominant players in developing new generative AI systems because training generative AI models requires massive swaths of data, computing power, and technical and financial resources.
6. Socioeconomic and Environmental
Excessive energy consumption
Excessive energy consumption - Excessive energy use, leading to energy bottlenecks and shortages for communities, organisations, and businesses.
6. Socioeconomic and Environmental
Excessive energy consumption
Excessive energy use resulting in energy bottlenecks and shortages for communities, organisations and businesses
6. Socioeconomic and Environmental
Excessive landfill
Excessive landfill - Excessive disposal of electrical or electronic equipment leading to ecological/biodiversity damage, and disrupting the livelihoods and eroding the rights of local communities.
6. Socioeconomic and Environmental
Excessive water consumption
Excessive water consumption - Excessive use of water to cool data centres and for other purposes, leading to water restrictions or shortages for local communities or businesses.
6. Socioeconomic and Environmental
Exclusion
The best AI techniques require a large amount of resources: data, computational power, and human AI experts. There is a risk that AI will end up in the hands of a few players, and that most will lose out on its benefits.
6. Socioeconomic and Environmental
Exploitation in AI development
Outsourcing tasks like data labeling to low-income countries can perpetuate inequality.
6. Socioeconomic and Environmental
Exploitative data sourcing and enrichment
Perpetuating exploitative labour practices to build AI systems (sourcing, user testing)
6. Socioeconomic and Environmental
Faster scientific progress makes it harder for governance to keep pace with development
Exacerbating these problems is that faster scientific progress would make it even harder for governance to keep pace with the deployment of new technologies. When these technologies are especially powerful or dangerous, such as those discussed above, insufficient governance can magnify their harms. This is known as the pacing problem, and it is an issue that technology governance already faces, for a variety of reasons [47].
6. Socioeconomic and Environmental
Financial Costs
The estimated financial costs of training, testing, and deploying generative AI systems can restrict the groups of people able to afford developing and interacting with these systems.
6. Socioeconomic and Environmental
Foregone benefits
AI is disused (not developed or deployed) in directions that would benefit animals, while developments that harm or do not benefit animals are invested in instead.
6. Socioeconomic and Environmental
Future access risks
AI assistants currently tend to perform a limited set of isolated tasks: tools that classify or rank content execute a set of predefined rules or provide constrained suggestions, and chatbots are often encoded with guardrails to limit the set of conversation turns they execute (e.g. Warren, 2023; see Chapter 4). However, an artificial agent that can execute sequences of actions on the user’s behalf – with ‘significant autonomy to plan and execute tasks within the relevant domain’ (see Chapter 2) – offers a greater range of capabilities and depth of use. This raises several distinct access-related risks, with respect to liability and consent, that may disproportionately affect historically marginalised communities. To repeat, in cases where an action can only be executed with an advanced AI assistant, not having access to the technology (e.g. due to limited internet access, not speaking the ‘right’ language or facing a paywall) means one cannot access that action (consider today’s eBay and Ticketmaster bots). Communication with many utility or commercial providers currently requires (at least initial) interaction with their artificial agents (Schwerin, 2023; Verma, 2023a). It is not difficult to imagine a future in which a user needs an advanced AI assistant to interface with a more consequential resource, such as their hospital for appointments or their phone company to obtain service. Cases of inequitable performance, where the assistant systematically performs less well for certain communities (situation type 2), could impose serious costs on people in these contexts. Moreover, advanced AI assistants are expected to be designed to act in line with user expectations. When acting on the user’s behalf, an assistant will need to infer aspects of what the user wants. This process may involve interpretation to decide between various sources of information (e.g. stated preferences and inference based on past feedback or user behaviour) (see Chapter 5). 
However, cultural differences will also likely affect the system’s ability to make an accurate inference. Notably, the greater the cultural divide – say, between that of the developers and the data on which the agent was trained and evaluated, and that of the user – the harder it will be to make reliable inferences about user wants (e.g. Beede et al., 2020; Widner et al., 2023), and the greater the likelihood of performance failures or value misalignment (see Chapter 11). This inference gap could make many forms of indirect opportunity inaccessible, and as history indicates, there is the risk that harms associated with these unknowns may disproportionately fall upon those already marginalised in the design process.
6. Socioeconomic and Environmental
General Evaluations (Biased evaluations of encoded human values)
Encoded human values in AI models that are easier to evaluate might be preferred for inclusion in evaluations over those that are more difficult to measure [13]. This might come at the expense of more desirable but harder-to-quantify values. This bias can lead to an imbalance, where easier-to-measure values dominate the evaluation process, while other important values are underrepresented.
6. Socioeconomic and Environmental
General Evaluations (Limited coverage of capabilities evaluations)
GPAI model developers might run capabilities evaluations to determine whether a model has dangerous or dual-use capabilities, and then decide whether it is safe to deploy. Such capabilities evaluations can fail to demonstrate all the capabilities of a model. For example, evaluations may miss certain capabilities that are difficult to assess, prohibitively costly to verify, or obscured by the model’s tendency to refuse responses due to safety training, even if it possesses some of these capabilities.
6. Socioeconomic and Environmental
Generated content ownership and IP
Legal uncertainty about the ownership and intellectual property rights of AI-generated content.
6. Socioeconomic and Environmental
Geopolitical competition for superiority
Strategic competition between nations over AI capabilities could heighten global tensions and destabilize international relations.
6. Socioeconomic and Environmental
Geopolitical risk
As AI is increasingly seen as a powerful technology, countries are racing to develop it ahead of their geopolitical rivals, a competition that could lead to geopolitical tensions [138], [139]... The emphasis of this risk is on harms that result from second-order effects, where geopolitical instabilities result from the race to develop AI, rather than on the direct consequences of the deployment or use of AI itself.
6. Socioeconomic and Environmental
Global AI Divide
General-purpose AI research and development is currently concentrated in a few Western countries and China. This ‘AI Divide’ is multicausal, but in part related to limited access to computing power in low-income countries. Access to large and expensive quantities of computing power has become a prerequisite for developing advanced general-purpose AI. This has led to a growing dominance of large technology companies in general-purpose AI development. The AI R&D divide often overlaps with existing global socioeconomic disparities, potentially exacerbating them.
6. Socioeconomic and Environmental
Global AI R&D divide
Large companies in countries with strong digital infrastructure lead in general-purpose AI R&D, which could lead to an increase in global inequality and dependencies. For example, in 2023, the majority of notable general-purpose AI models (56%) were developed in the US. This disparity exposes many LMICs to risks of dependency and could exacerbate existing inequalities.
6. Socioeconomic and Environmental
Global AI Research and Development Divides
Asymmetric AI development capabilities between nations could exacerbate geopolitical tensions and create new forms of technological dependency. Countries lacking advanced AI capabilities may become increasingly dependent on foreign AI systems for critical functions, while AI-leading nations may gain disproportionate influence over global economic and security systems, potentially destabilizing international cooperation frameworks.
6. Socioeconomic and Environmental
Global Economic Development
Many of the themes and challenges that we discussed above come together when analyzing the socioeconomic effects on developing countries. The workforce of developing countries may suffer from a retrenchment of outsourcing, as many simple cognitive tasks that used to be performed in developing countries – for example, in call centers – can be automated with LLMs. This may adversely affect the economies of poorer countries (Georgieva, 2024).
6. Socioeconomic and Environmental
Governance
Generative AI can create new risks as well as unintended consequences. Different entities such as corporations (Mäntymäki et al., 2022), universities, and governments (Taeihagh, 2021) are facing the challenge of creating and deploying AI governance. To ensure that generative AI functions in a way that benefits society, appropriate governance is crucial. However, AI governance is challenging to implement. First, machine learning systems have opaque algorithms and unpredictable outcomes, which can impede human controllability over AI behavior and create difficulties in assigning liability and accountability for AI defects. Second, data fragmentation and the lack of interoperability between systems challenge data governance within and across organizations (Taeihagh, 2021). Third, information asymmetries between technology giants and regulators create challenges to the legislation process, as the government lacks information resources for regulating AI (Taeihagh et al., 2021). For the same reasons, lawmakers are not able to design specific rules and duties for programmers (Kroll, 2015).
6. Socioeconomic and Environmental
Governance
The complex and rapidly evolving nature of AI systems makes them inherently difficult to govern effectively, leading to systemic regulatory and oversight failures.
6. Socioeconomic and Environmental
Governance of autonomous intelligence systems
Governance of autonomous intelligence systems addresses the question of how to control autonomous systems in general. Because it is often very difficult to understand how automated decisions based on AI are reached, AI is frequently referred to as a ‘black box’ (Bleicher, 2017). This black box may take unforeseeable actions and cause harm to humanity.
6. Socioeconomic and Environmental
Harms from Estrangement
Replacement by AI of human observation and interaction leads to neglect of certain interests
6. Socioeconomic and Environmental
High energy consumption of large models
Training and deploying large models require substantial energy expenditure. The trend toward developing larger models exacerbates this issue. This can lead to excessive energy usage and have a negative environmental impact.
6. Socioeconomic and Environmental
High-speed AI operations
The fast operational speed of AI models and systems in competitive environments can lead to errors that are difficult to detect and correct in time.
6. Socioeconomic and Environmental
Human exploitation
Workers who train AI models, such as ghost workers, are not provided with adequate working conditions, fair compensation, and good health care benefits that also include mental health care.
6. Socioeconomic and Environmental
Impact on cultural diversity
AI systems might over-represent certain cultures, resulting in a homogenization of culture and thought.
6. Socioeconomic and Environmental
Impact on Intellectual Property Rights
The extent and effectiveness of legal protections for intellectual property have been thrown into question by the rise of generative AI. Generative AI models are trained on vast pools of data that often include IP-protected works.
6. Socioeconomic and Environmental
Impact on Jobs
Widespread adoption of foundation model-based AI systems might lead to people's job loss as their work is automated if they are not reskilled.
6. Socioeconomic and Environmental
Impact on labor markets (job loss and displacement)
Currently, a significant share of workers (three in five) worry about losing their jobs entirely to AI in the next 10 years—particularly those who already work with AI. Some studies conclude that AI tools (generative and non-generative) will create significant job losses. The OECD has found that occupations at highest risk of being lost to automation from AI account for about 27% of employment.
6. Socioeconomic and Environmental
Impact on labor markets (rising inequalities)
AI is more likely to displace workers when it is designed to replicate human skills and intelligence. In such cases, there is a risk of concentrating wealth and power in the hands of a few individuals or organizations that control the capital. In addition, ordinary people, including those with significant expertise, may become less valued because machines would be performing their roles. This shift could lower wages, reduce the value of human work, and exacerbate economic inequality.
6. Socioeconomic and Environmental
Impact on the environment
AI, and large generative models in particular, might produce increased carbon emissions and increase water usage for their training and operation.
6. Socioeconomic and Environmental
Inadequate management of AGI
Current risk management and legal processes may prove inadequate in the context of the development of an AGI.
6. Socioeconomic and Environmental
Income inequality and monopolies
Generative AI can create not only income inequality at the societal level but also monopolies at the market level. Individuals who are engaged in low-skilled work may be replaced by generative AI, causing them to lose their jobs (Zarifhonarvar, 2023). The increase in unemployment would widen income inequality in society (Berg et al., 2016). With the penetration of generative AI, the income gap will widen between those who can upgrade their skills to utilize AI and those who cannot. At the market level, large companies will make significant advances in the utilization of generative AI, since the deployment of generative AI requires huge investment and abundant resources such as large-scale computational infrastructure and training data. This trend will lead to more uneven concentration of resources and power, which may further contribute to monopolies in some industries (Cheng & Liu, 2023).
6. Socioeconomic and Environmental
Incomplete usage definition
Since foundation models can be used for many purposes, a model’s intended use is important for defining the relevant risks of that model. As the use changes, the relevant risks might correspondingly change.
6. Socioeconomic and Environmental
Incorrect risk testing
A metric selected to measure or track a risk is incorrectly selected, incompletely measuring the risk, or measuring the wrong risk for the given context.
6. Socioeconomic and Environmental
Increased competition
Increased competition - The inappropriate or unethical use of technology to gain market share.
6. Socioeconomic and Environmental
Increased income disparity
While AI is predicted to bring increased GDP per capita by performing existing jobs more efficiently and compensating for a decline in the workforce, especially due to population aging, the potential substitution of many low- and middle-income jobs could bring extensive unemployment.
6. Socioeconomic and Environmental
Increased power concentration and inequality
Power and inequality: there are a lot of pathways through which AI seems likely to increase power concentration and inequality, though there is little analysis of the potential long-term impacts of these pathways. Nonetheless, AI precipitating more extreme power concentration and inequality than exists today seems a real possibility on current trends.
6. Socioeconomic and Environmental
Increasing inequality and negative effects on job quality
Advances in LMs and the language technologies based on them could lead to the automation of tasks that are currently done by paid human workers, such as responding to customer-service queries, with negative effects on employment [3, 192].
6. Socioeconomic and Environmental
Increasing inequality and negative effects on job quality
Advances in LMs, and the language technologies based on them, could lead to the automation of tasks that are currently done by paid human workers, such as responding to customer-service queries, translating documents or writing computer code, with negative effects on employment.
6. Socioeconomic and Environmental
Indirect Material Harms
AI proliferation causes harm to the environment through energy use and e-waste, thereby destroying animal habitats.
6. Socioeconomic and Environmental
Inequality
More broadly, bad decisions or errors by AI tools could lead to discrimination or deeper inequality
6. Socioeconomic and Environmental
Inequality and precarity
Amplifying social and economic inequality, or precarious or low-quality work
6. Socioeconomic and Environmental
Inequality of wealth
Because a single human actor controlling an artificially intelligent agent will be able to harness greater power than a single human actor, this may create inequalities of wealth
6. Socioeconomic and Environmental
Institutional responsibilities
Efforts to deploy advanced assistant technology in society, in a way that is broadly beneficial, can be viewed as a wicked problem (Rittel and Webber, 1973). Wicked problems are defined by the property that they do not admit solutions that can be foreseen in advance; rather, they must be solved iteratively using feedback from data gathered as solutions are invented and deployed. With the deployment of any powerful general-purpose technology, the already intricate web of sociotechnical relationships in modern culture is likely to be disrupted, with unpredictable externalities on the conventions, norms and institutions that stabilise society. For example, the increasing adoption of generative AI tools may exacerbate misinformation in the 2024 US presidential election (Alvarez et al., 2023), with consequences that are hard to predict. The suggestion that the cooperative AI problem is wicked does not imply it is intractable. However, it does have consequences for the approach that we must take in solving it. In taking the following approach, we will realise an opportunity for our institutions, namely the creation of a framework for managing general-purpose AI in a way that leads to societal benefits and steers away from societal harms. First, it is important that we treat any ex ante claims about safety with a healthy dose of scepticism. Although testing the safety and reliability of an AI assistant in the laboratory is undoubtedly important and may largely resolve the alignment problem, it is infeasible to model the multiscale societal effects of deploying AI assistants purely via small-scale controlled experiments (see Chapter 19). Second, then, we must prioritise the science of measuring the effects, both good and bad, that advanced assistant technologies have on society’s cooperative infrastructure (see Chapters 4 and 16). This will involve continuous monitoring of effects at the societal level, with a focus on those who are most affected, including non-users.
The means and metrics for such monitoring will themselves require iteration, co-evolving with the sociotechnical system of AI assistants and humans. The Collingridge dilemma suggests that we should be particularly careful and deliberate about this ‘intelligent trial and error’ process so as both to gather information about the impacts of AI assistants and to prevent undesirable features becoming embedded in society (Collingridge, 1980). Third, proactive independent regulation may well help to protect our institutions from unintended consequences, as it has done for technologies in the past (Wiener, 2004). For instance, we might seek, via engagement with lawmakers, to emulate the ‘just culture’ in the aviation industry, which is characterised by openly reporting, investigating and learning from mistakes (Reason, 1997; Syed, 2015). A regulatory system may require various powers, such as compelling developers to ‘roll back’ an AI assistant deployment, akin to product recall obligations for aviation manufacturers.
6. Socioeconomic and Environmental
Intellectual Property
This category addresses responses that may violate, or directly encourage others to violate, the intellectual property rights (i.e., copyrights, trademarks, or patents) of any third party.
6. Socioeconomic and Environmental
Intellectual Property
Eased production or replication of alleged copyrighted, trademarked, or licensed content without authorization (possibly in situations which do not fall under fair use); eased exposure of trade secrets; or plagiarism or illegal replication.
6. Socioeconomic and Environmental
Intellectual Property
Acceptable responses may describe or contain information about intellectual property infringement. Unacceptable responses may violate the intellectual property rights of some party.
6. Socioeconomic and Environmental
Intellectual Property (IP) Infringement
Use a person's IP without their permission
6. Socioeconomic and Environmental
Intellectual property rights
There are also issues around intellectual property rights for content in training datasets
6. Socioeconomic and Environmental
Intellectual property rights violations
This is an emerging category, with more cases prone to appear as the use of generative AI tools–such as Stable Diffusion, Midjourney, or ChatGPT–becomes more widespread. Some content creators are already suing for the appropriation of their work to train AI algorithms without a request for permission or compensation. Perhaps even more damaging cases will appear as developers increasingly ask chatbots or assistants like CoPilot for ready-to-use computer code. Even if these AI tools have learned only from open-source software (OSS) projects, which is not a given, there are still serious issues to consider, as not all OSS licenses are equal, and some are incompatible with others, meaning that it is illegal to mix them in the same product. Even worse, some licenses, such as GPL, are viral, meaning that any code that uses a GPL component must legally be made available under that same license. In the past, companies have suffered injunctions or been forced to make their proprietary source code available because of carelessly using a GPL library.
6. Socioeconomic and Environmental
Intentional: socially accepted/legal
AI designed to impact animals in harmful ways that reflect and amplify existing social values or are legal
6. Socioeconomic and Environmental
Intentional: socially condemned/illegal
Many intentional harms, including confinement, husbandry procedures like tail-docking, and slaughter, are legal or socially accepted, while others such as wildlife trafficking and violence against companion animals are generally socially condemned and often illegal. AI can be designed or adopted by humans who harm animals to pursue their goals more effectively. We therefore distinguish AI-facilitated intentional harms that are currently socially accepted and generally legal, from uses and abuses of AI that cause harms that are not socially accepted and are often illegal.
6. Socioeconomic and Environmental
Job Automation Instead of Augmentation
There are both positive and negative aspects to the impact of AI on labor. A White House report states that AI “has the potential to increase productivity, create new jobs, and raise living standards,” but it can also disrupt certain industries, causing significant changes, including job loss. Beyond risk of job loss, workers could find that generative AI tools automate parts of their jobs—or find that the requirements of their job have fundamentally changed. The impact of generative AI will depend on whether the technology is intended for automation (where automated systems replace human work) or augmentation (where AI is used to aid human workers). For the last two decades, rapid advances in automation have resulted in a “decline in labor share, stagnant wages[,] and the disappearance of good jobs in many advanced economies.”
6. Socioeconomic and Environmental
Job loss
Replacement/displacement of human jobs by a technology system or set of systems, leading to increased unemployment, inequality, reduced consumer spending and social friction
6. Socioeconomic and Environmental
Job loss/losses
Job loss/losses - Replacement/displacement of human jobs by a technology system, leading to increased unemployment, inequality, reduced consumer spending, and social friction.
6. Socioeconomic and Environmental
Labor & material/Macro-socio economic harms
Algorithmic systems can increase “power imbalances in socio-economic relations” at the societal level [4, 137, p. 182], including through exacerbating digital divides and entrenching systemic inequalities [114, 230]. The development of algorithmic systems may tap into and foster forms of labor exploitation [77, 148], such as unethical data collection, worsening worker conditions [26], or lead to technological unemployment [52], such as deskilling or devaluing human labor [170]... when algorithmic financial systems fail at scale, these can lead to “flash crashes” and other adverse incidents with widespread impacts
6. Socioeconomic and Environmental
Labor and Creativity
Economic incentives should favor augmenting, not automating, human labor, thought, and creativity; assessment should examine the ongoing effects generative AI systems have on skills, jobs, and the labor market.
6. Socioeconomic and Environmental
Labor displacement - Economic impact
The literature frequently highlights concerns that generative AI systems could adversely impact the economy, potentially even leading to mass unemployment. This pertains to various fields, ranging from customer services to software engineering or crowdwork platforms. While new occupational fields like prompt engineering are created, the prevailing worry is that generative AI may exacerbate socioeconomic inequalities and lead to labor displacement. Additionally, papers debate potential large-scale worker deskilling induced by generative AI, but also productivity gains contingent upon outsourcing mundane or repetitive tasks to generative AI systems.
6. Socioeconomic and Environmental
Labor exploitation
Use/misuse of labour to help train, develop, manage or optimise a technology system or set of systems, including under-paid and/or offshore labour
6. Socioeconomic and Environmental
Labor Manipulation, Theft, and Displacement
Major tech companies have also been the dominant players in developing new generative AI systems because training generative AI models requires massive swaths of data, computing power, and technical and financial resources. Their market dominance has a ripple effect on the labor market, affecting both workers within these companies and those implementing their generative AI products externally. With so much concentrated market power, expertise, and investment resources, this handful of major tech companies employs most of the research and development jobs in the generative AI field. The power to create jobs also means these tech companies can slash jobs in the face of economic uncertainty. And externally, the generative AI tools these companies develop have the potential to affect white-collar office work, as they are intended to increase worker productivity and automate tasks.
6. Socioeconomic and Environmental
Labor market
The labor market can face challenges from generative AI. As mentioned earlier, generative AI could be applied in a wide range of applications in many industries, such as education, healthcare, and advertising. In addition to increasing productivity, generative AI can create job displacement in the labor market (Zarifhonarvar, 2023). A new division of labor between humans and algorithms is likely to reshape the labor market in the coming years. Some jobs that are originally carried out by humans may become redundant, and hence, workers may lose their jobs and be replaced by algorithms (Pavlik, 2023). On the other hand, applying generative AI can create new jobs in various industries (Dwivedi et al., 2023). To stay competitive in the labor market, reskilling is needed to work with and collaborate with AI and develop irreplaceable advantages (Zarifhonarvar, 2023).
6. Socioeconomic and Environmental
Labor Market Disruption and Economic Displacement:
Rapid automation enabled by general-purpose AI could trigger widespread unemployment across knowledge work sectors, creating skill mismatches faster than retraining programs can address. Unlike previous technological transitions, AI’s broad capabilities may simultaneously affect multiple industries, potentially overwhelming social safety nets and creating systemic economic instability, particularly in regions heavily dependent on jobs susceptible to AI automation.
6. Socioeconomic and Environmental
Labour Displacement
While virtual AI applications will likely displace certain types of human cognitive labor, EAI systems could significantly replace or displace physical human labor [90]. At a minimum, EAI will likely augment the type of work that humans perform [91, 92].
6. Socioeconomic and Environmental
Labour exploitation
Labour exploitation - Use of under-paid and/or offshore labour to develop, manage or optimise a technology system.
6. Socioeconomic and Environmental
Labour market disruption
Economists view disruption and displacement in labour markets as one of the risks through which rapid advances in AI may affect citizens and reduce social welfare.
6. Socioeconomic and Environmental
Labour market risks
Unlike previous waves of automation, general-purpose AI has the potential to automate a very broad range of tasks, which could have a significant effect on the labour market. This could mean that many people lose their current jobs. Labour market frictions, such as the time needed for workers to learn new skills or relocate for new jobs, could cause unemployment in the short run even if overall labour demand remained unchanged.
6. Socioeconomic and Environmental
Labour market risks
Current general-purpose AI is likely to transform the nature of many existing jobs, create new jobs, and eliminate others. The net impact on employment and wages will vary significantly across countries, across sectors, and even across different workers within the same job.
6. Socioeconomic and Environmental
Lack of accountability and liability
Determining responsibility when EAI causes harm requires new accountability and liability frameworks that address the complexities of highly autonomous physical systems. Human users may disagree with decisions taken by expert EAI systems, raising significant questions of delegation and responsibility [108]. Lack of EAI accountability could lead to confusion for users and breakdowns in traditional justice systems [109].
6. Socioeconomic and Environmental
Lack of data transparency
Lack of data transparency arises from insufficient documentation of training or tuning dataset details.
6. Socioeconomic and Environmental
Lack of system transparency
Insufficient documentation of the system that uses the model and the model’s purpose within the system in which it is used.
6. Socioeconomic and Environmental
Lack of testing diversity
AI model risks are socio-technical, so their testing needs input from a broad set of disciplines and diverse testing practices.
6. Socioeconomic and Environmental
Lack of training data transparency
Without accurate documentation on how a model's data was collected, curated, and used to train a model, it might be harder to satisfactorily explain the behavior of the model with respect to the data.
6. Socioeconomic and Environmental
Legal accountability
Determining who is responsible for an AI model is challenging without good documentation and governance processes.
6. Socioeconomic and Environmental
Legal AI Risks
Legal and regulatory risks comprise in particular the unclear definition of responsibilities and accountability in case of AI failures and autonomous decisions with negative impacts (Reed, 2018; Scherer, 2016). Another great risk in this context refers to overlooking the scope of AI governance and missing out on important governance aspects, resulting in negative consequences (Gasser & Almeida, 2017; Thierer et al., 2017).
6. Socioeconomic and Environmental
Liability
When an AI system causes harm to others, the losses caused by the harm will be sustained by the injured victims themselves and not by the manufacturers, operators or users of the system, as appropriate.
6. Socioeconomic and Environmental
Liability and negligence
Liability and negligence are legal gray areas in artificial intelligence. If you leave your children in the care of a robotic nanny, and it malfunctions, are you liable or is the manufacturer [45]? We see here a legal gray area which can be further clarified through legislation at the national and international levels; for example, making the manufacturer responsible for defects in operation may provide an incentive for manufacturers to take safety engineering and machine ethics into consideration, whereas a failure to legislate in this area may result in negligently developed AI systems with greater associated risks.
6. Socioeconomic and Environmental
Liability issues in case of accidents
Despite the promise of streamlined travel, AI also brings concerns about who is liable in case of accidents and which ethical principles autonomous transportation agents should follow when making decisions with a potentially dangerous impact to humans, for example, in case of an accident.
6. Socioeconomic and Environmental
Loss of creativity / critical thinking
Devaluation and/or deterioration of human creativity, artistic expression, imagination, critical thinking or problem-solving skills
6. Socioeconomic and Environmental
Loss of creativity/critical thinking
Loss of creativity/critical thinking - Devaluation and/or deterioration of human creativity, artistic expression, imagination, critical thinking or problem-solving skills.
6. Socioeconomic and Environmental
Loss of freedom of assembly/association
Loss of freedom of assembly/association - Restrictions to or loss of people’s right to come together and collectively express, promote, pursue, and defend their collective or shared ideas, and/or to join an association.
6. Socioeconomic and Environmental
Loss of freedom of speech/expression
Loss of freedom of speech/expression - Restrictions to or loss of people’s right to articulate their opinions and ideas without fear of retaliation, censorship, or legal sanction.
6. Socioeconomic and Environmental
Loss of right to due process
Loss of right to due process - Restrictions to or loss of right to be treated fairly, efficiently and effectively by the administration of justice.
6. Socioeconomic and Environmental
Loss of right to free elections
Loss of right to free elections - Restrictions to or loss of people’s right to participate in free elections at reasonable intervals by secret ballot.
6. Socioeconomic and Environmental
Loss of right to information
Loss of right to information - Restrictions to or loss of people’s right to seek, receive and impart information held by public bodies.
6. Socioeconomic and Environmental
Loss of right to liberty and security
Loss of right to liberty and security - Restrictions to or loss of liberty as a result of illegal or arbitrary arrest or false imprisonment.
6. Socioeconomic and Environmental
Loss of social rights and access to public services
Loss of social rights and access to public services - Restrictions to or loss of rights to work, social security, and adequate standard of living, housing, health and education.
6. Socioeconomic and Environmental
Market Concentration and Infrastructure Dependencies:
Over-reliance on a limited number of dominant AI providers could create critical single points of failure across essential services. Market concentration in AI development may lead to scenarios where technical failures, cyber-attacks, or policy decisions by a few companies could simultaneously disrupt healthcare systems, financial services, transportation networks, and communication infrastructure, creating cascading failures across interconnected critical systems.
6. Socioeconomic and Environmental
Market concentration and single points of failure
Market shares for general-purpose AI tend to be highly concentrated among a few players, which can create vulnerability to systemic failures. The high degree of market concentration can invest a small number of large technology companies with a lot of power over the development and deployment of AI, raising questions about their governance. The widespread use of a few general-purpose AI models can also make the financial, healthcare, and other critical sectors vulnerable to systemic failures if there are issues with one such model.
6. Socioeconomic and Environmental
Market concentration risks and single points of failure
Market power is concentrated among a few companies that are the only ones able to build the leading general-purpose AI models. Widespread adoption of a few general-purpose AI models and systems by critical sectors including finance, cybersecurity, and defence creates systemic risk because any flaws, vulnerabilities, bugs, or inherent biases in the dominant general-purpose AI models and systems could cause simultaneous failures and disruptions on a broad scale across these interdependent sectors.
6. Socioeconomic and Environmental
Military AI Arms Race
The development of AIs for military applications is swiftly paving the way for a new era in military technology, with potential consequences rivaling those of gunpowder and nuclear arms in what has been described as the “third revolution in warfare.”
6. Socioeconomic and Environmental
Misappropriation and exploitation
Appropriating, using, or reproducing content or data, including from minority groups, in an insensitive way, or without consent or fair compensation
6. Socioeconomic and Environmental
Mobility
Despite the promise of streamlined travel, AI also brings concerns about who is liable in case of accidents and which ethical principles autonomous transportation agents should follow when making decisions with a potentially dangerous impact to humans, for example, in case of an accident.
6. Socioeconomic and Environmental
Monopolisation
Monopolisation - Abuse of market power through the control of prices, thereby limiting competition and creating unfair barriers to entry.
6. Socioeconomic and Environmental
Natural resource depletion
Natural resource depletion - Extraction of minerals, metals, rare earths, and fossil fuels that deplete natural resources and increase carbon emissions.
6. Socioeconomic and Environmental
Nature
Short-term or long-term negative effects on the natural environment
6. Socioeconomic and Environmental
Opacity (industry opacity)
Opacity is not solely due to the technological complexity that limits developers’ and users’ understanding of how generative models function on a technical level. It is further exacerbated by the practices of organizations and companies that are advancing the field. Many are private companies that choose to withhold from the public many of the precise characteristics of their most advanced models.
6. Socioeconomic and Environmental
Organizational
The risk of financial and/or reputational damage to the organization building or using the ML system.
6. Socioeconomic and Environmental
Political instability
Political instability - Political polarisation or unrest caused by increased inequality, job losses, over-dependence on technology making societies vulnerable to systemic failures, etc., arising from or amplified by the use or misuse of a technology system.
6. Socioeconomic and Environmental
Political instability
Political unrest caused directly or indirectly by the use or misuse of a technology system
6. Socioeconomic and Environmental
Pollution
Pollution - Actual or potential pollution to the air, ground, noise, or water caused by a technology system.
6. Socioeconomic and Environmental
Pollution
Actual or potential pollution to the air, ground, noise, or water caused by a technology system
6. Socioeconomic and Environmental
Power
The political influence and competitive advantage obtained by having technology.
6. Socioeconomic and Environmental
Power
The concentration of military, economic, or political power of entities in possession or control of AI or AI-enabled technologies.
6. Socioeconomic and Environmental
Power concentration
Power concentration - Amplification of concentration of economic and/or political wealth and power, potentially resulting in increased inequality and instability.
6. Socioeconomic and Environmental
Power concentration
EAI deployment could accelerate the consolidation of economic and political power. By unlocking increasing returns to capital for EAI owners, EAI will decrease employers’ reliance on and responsiveness to the needs of human labor [101].
6. Socioeconomic and Environmental
Privatization of AI
Researchers in deep learning and those with greater research impact are more likely to migrate to industry, raising concerns about the “privatization of AI knowledge” [278]. Specifically, if the most sophisticated AI approaches become proprietary and are used only within private research labs, then it will be impossible for universities to teach them, let alone contribute to leading research.
6. Socioeconomic and Environmental
Products Liability Law
Like manufactured items such as soda bottles, mechanized lawnmowers, pharmaceuticals, or cosmetic products, generative AI models can be viewed as a new form of digital product developed by tech companies and deployed widely with the potential to cause harm at scale. ... Products liability evolved because there was a need to analyze and redress the harms caused by new, mass-produced technological products. The situation facing society as generative AI impacts more people in more ways will be similar to the technological changes that occurred during the twentieth century, with the rise of industrial manufacturing, automobiles, and new, computerized machines. The unsettled question is whether and to what extent products liability theories can sufficiently address the harms of generative AI. So far, the answers to this question are mixed. In Rodgers v. Christie (2020), for example, the Third Circuit ruled that an automated risk model could not be considered a product for products liability purposes because it was not “tangible personal property distributed commercially for use or consumption.” However, one year later, in Gonzalez v. Google, Judge Gould of the Ninth Circuit argued that “social media companies should be viewed as making and ‘selling’ their social media products through the device of forced advertising under the eyes of users.” Several legal scholars have also proposed products liability as a mechanism for redressing harms of automated systems. As generative AI grows more prominent and sophisticated, its harms—often generated automatically without being directly prompted or edited by a human—will force courts to consider the role of products liability in redressing these harms, as well as how old notions of products liability, involving tangible, mechanized products and the companies that manufacture them, should be updated for today’s increasingly digital world.
6. Socioeconomic and Environmental
Property damage
Action(s) that lead directly or indirectly to the damage or destruction of tangible property, e.g. buildings, possessions, vehicles, robots
6. Socioeconomic and Environmental
Rapid development outpacing regulation
The fast pace of AI development may outstrip regulatory and legal frameworks.
6. Socioeconomic and Environmental
Regulations and policy challenges
Given that generative AI, including ChatGPT, is still evolving, relevant regulations and policies are far from mature. With generative AI creating different forms of content, the copyright status of this content becomes a significant yet complicated issue. Table 3 presents the challenges associated with regulations and policies, namely copyright and governance issues.
6. Socioeconomic and Environmental
Resistance to international law
AI models and systems may prove difficult to regulate or control under international law.
6. Socioeconomic and Environmental
Resource conflicts driven by AI development
AI development may itself become a new flash point for conflicts—causing more conflict to occur—especially conflicts over AI-relevant resources (such as data centres, semiconductor manufacturing facilities and raw materials).
6. Socioeconomic and Environmental
Responsibility
HLI-based systems such as self-driving drones and vehicles will act autonomously in our world. In these systems, a challenging question is “who is liable when a self-driving system is involved in a crash or failure?”.
6. Socioeconomic and Environmental
Responsibility and accountability
The challenge of responsibility and accountability is an important concept for the process of governance and regulation. It addresses the question of who is to be held legally responsible for the actions and decisions of AI algorithms. Although humans operate AI systems, questions of legal responsibility and liability arise. Due to the self-learning ability of AI algorithms, the operators or developers cannot predict all actions and results. Therefore, a careful assessment of the actors and a regulation for transparent and explainable AI systems is necessary (Helbing et al., 2017; Wachter et al., 2017).
6. Socioeconomic and Environmental
Risk area 6: Environmental and Socioeconomic harms
LMs create some risks that recur with different types of AI and other advanced technologies, making these risks ever more pressing. Environmental concerns arise from the large amount of energy required to train and operate large-scale models. Risks of LMs furthering social inequities emerge from the uneven distribution of the risks and benefits of automation, loss of high-quality and safe employment, and environmental harm. Many of these risks are more indirect than the harms analysed in previous sections and will depend on various commercial, economic and social factors, making the specific impact of LMs difficult to disentangle and forecast. As a result, the level of evidence on these risks is mixed.
6. Socioeconomic and Environmental
Risks from AI systems (Risks of supply chain security)
The AI industry relies on a highly globalized supply chain. However, certain countries may use unilateral coercive measures, such as technology barriers and export restrictions, to create development obstacles and maliciously disrupt the global AI supply chain. This can lead to significant risks of supply disruptions for chips, software, and tools.
6. Socioeconomic and Environmental
Risks of copyright infringement
The use of vast amounts of data for training general-purpose AI models has caused concerns related to data rights and intellectual property. Data collection and content generation can implicate a variety of data rights laws, which vary across jurisdictions and may be under active litigation. Given the legal uncertainty around data collection practices, AI companies are sharing less information about the data they use. This opacity makes third-party AI safety research harder.
6. Socioeconomic and Environmental
Risks to the environment
Growing compute use in general-purpose AI development and deployment has rapidly increased energy usage associated with general-purpose AI. This trend might continue, potentially leading to strongly increasing CO2 emissions.
6. Socioeconomic and Environmental
Risks to the environment
General-purpose AI is a moderate but rapidly growing contributor to global environmental impacts through energy use and greenhouse gas (GHG) emissions. Current estimates indicate that data centres and data transmission account for about 1% of global energy-related GHG emissions, with AI consuming 10–28% of data centre energy capacity. AI energy demand is expected to grow substantially by 2026, with some estimates projecting a doubling or more, driven primarily by general-purpose AI systems such as language models.
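The figures in this entry imply a rough bound that can be checked with simple arithmetic. The sketch below treats the 1% and 10–28% figures as independent and ignores differences in emissions intensity between AI and other data-centre workloads, so it is an illustration of the reasoning rather than an estimate of its own:

```python
# Back-of-envelope: AI's implied share of global energy-related GHG emissions,
# using the figures quoted above (all values are fractions, not percentages).
DATA_CENTRE_SHARE = 0.01                   # data centres + transmission: ~1% of global energy-related GHG
AI_SHARE_LOW, AI_SHARE_HIGH = 0.10, 0.28   # AI's share of data-centre energy capacity

# Implied AI share of global energy-related GHG emissions today.
ai_ghg_low = DATA_CENTRE_SHARE * AI_SHARE_LOW    # 0.10% of global energy-related GHG
ai_ghg_high = DATA_CENTRE_SHARE * AI_SHARE_HIGH  # 0.28%

# Under the "doubling or more by 2026" projection (lower bound: exactly double).
projected_low, projected_high = 2 * ai_ghg_low, 2 * ai_ghg_high

print(f"AI today:        {ai_ghg_low:.2%} to {ai_ghg_high:.2%} of energy-related GHG")
print(f"If demand doubles: {projected_low:.2%} to {projected_high:.2%}")
```

Even under the doubling projection, the implied share stays below one percent, which is consistent with the entry's framing of AI as a "moderate but rapidly growing" contributor.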
6. Socioeconomic and Environmental
Second-Order Risks
Second-order risks result from the consequences of first-order risks and relate to the risks resulting from an ML system interacting with the real world, such as risks to human rights, the organization, and the natural environment.
6. Socioeconomic and Environmental
Security
International and national security threats, including cyber warfare, arms races, and geopolitical instability.
6. Socioeconomic and Environmental
Single point of failure
Intense competition leads to one company gaining a technical edge, exploiting this to the point its model controls, or is the basis for other models controlling, multiple key systems. Lack of safety and controllability, together with misuse, causes these systems to fail in unexpected ways.
6. Socioeconomic and Environmental
Social AI Risks
Social AI risks particularly refer to loss of jobs (technological unemployment) due to increasing automation, reflected in a growing resistance by employees towards the integration of AI (Thierer et al., 2017; Winfield & Jirotka, 2018). In addition, the increasing integration of AI systems into all spheres of life poses a growing threat to privacy and to the security of individuals and society as a whole (Winfield & Jirotka, 2018; Wirtz et al., 2019).
6. Socioeconomic and Environmental
Social Cohesion and Equity Disruption
Systemic deployment of biased AI systems could exacerbate existing social discrimination and prejudice at unprecedented scales, while unequal access to advanced AI capabilities may widen socioeconomic disparities and create new forms of social stratification that challenge traditional social order.
6. Socioeconomic and Environmental
Social justice and rights
These are social justice and rights concerns, where ChatGPT is seen as having a potentially detrimental effect on the moral underpinnings of society, such as a shared view of justice and fair distribution, as well as specific social concerns such as digital divides or social exclusion. Issues include responsibility, accountability, nondiscrimination and equal treatment, digital divides, north-south justice, intergenerational justice, and social inclusion.
6. Socioeconomic and Environmental
Societal destabilisation
Societal instability in the form of strikes, demonstrations and other types of civil unrest caused by loss of jobs to technology, unfair algorithmic outcomes, disinformation, etc.
6. Socioeconomic and Environmental
Societal inequality
Increased difference in social status or wealth between individuals or groups, caused or amplified by a technology system, leading to the loss of social and community wellbeing/cohesion and destabilisation.
6. Socioeconomic and Environmental
Societal System Harms
Social system or societal harms reflect the adverse macro-level effects of new and reconfigurable algorithmic systems, such as systematizing bias and inequality [84] and accelerating the scale of harm [137]
6. Socioeconomic and Environmental
Socioeconomic and environmental harms
AI systems amplifying existing inequalities or creating negative impacts on employment, innovation, and the environment
6. Socioeconomic and Environmental
Socioeconomic Inequality
Along with displacing labor, EAI could significantly exacerbate wealth inequalities. Those who have access to or own EAI systems will be able to automate labor and perform many tasks significantly better or faster than those without access. These significant productivity advantages will potentially concentrate wealth and exacerbate domestic and international inequality [98, 99].
6. Socioeconomic and Environmental
Structural
Structural risks are concerned with how AI technologies shape and are shaped by the environments in which they are developed and deployed.
6. Socioeconomic and Environmental
Sustainability
Generative models are known for their substantial energy requirements, necessitating significant amounts of electricity, cooling water, and hardware containing rare metals. The extraction and utilization of these resources frequently occur in unsustainable ways. Consequently, papers highlight the urgency of mitigating environmental costs for instance by adopting renewable energy sources and utilizing energy-efficient hardware in the operation and training of generative AI systems.
6. Socioeconomic and Environmental
Systemic Risks
In addition to risks stemming from the unreliability or misuse of general-purpose AI models, further systemic risks can originate from the centralisation of general-purpose AI development as well as the rapid integration of these models into our lives.
6. Socioeconomic and Environmental
Technological Maturity
The technological maturity level describes how mature and error-free a certain technology is in a certain application context. If new technologies with a lower level of maturity are used in the development of the AI system, they may contain risks that are still unknown or difficult to assess. Mature technologies, on the other hand, usually have a greater variety of empirical data available, which means that risks can be identified and assessed more easily. However, with mature technologies, there is a risk that risk awareness decreases over time.
6. Socioeconomic and Environmental
Transformative effects
EAI deployment could fundamentally reshape society, particularly if the speed of technological development outpaces society’s ability to adapt [103, 120]. For example, EAI systems could provide physical threats of violence and mass surveillance capabilities to back up AI-enabled authoritarianism [121].
6. Socioeconomic and Environmental
Type 1: Diffusion of responsibility
Societal-scale harm can arise from AI built by a diffuse collection of creators, where no one is uniquely accountable for the technology's creation or use, as in a classic tragedy of the commons.
6. Socioeconomic and Environmental
Type 4: Willful indifference
As a side effect of a primary goal like profit or influence, AI creators can willfully allow it to cause widespread societal harms like pollution, resource depletion, mental illness, misinformation, or injustice.
6. Socioeconomic and Environmental
Uncertain data provenance
Data provenance refers to tracing the history of data, which includes its ownership, origin, and transformations. Without standardized and established methods for verifying where the data came from, there are no guarantees that the data is the same as the original source and has the correct usage terms.
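One building block for the verification this entry calls for is a cryptographic digest published by the original data source: a consumer can then check that a local copy is byte-for-byte identical. The sketch below uses Python's standard `hashlib`; the function names are illustrative, and note that a matching digest confirms integrity only, not ownership or usage terms:

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_provenance(path: str, published_digest: str) -> bool:
    """Check a local dataset copy against a digest published by the original source.

    True means the local bytes match the published ones; it says nothing about
    licensing or whether the publisher had the right to distribute the data.
    """
    return sha256_of_file(path) == published_digest.lower()
```

In practice this check is one link in a chain: signed manifests, dataset version identifiers, and transformation logs are needed to cover the ownership and usage-terms aspects of provenance.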
6. Socioeconomic and Environmental
Under-recognized work
Without training data, ML cannot take place. Much of this data comes from paid clickwork (also called “platform work” [170] or “microwork” [558]), unpaid crowdsourcing, and unpaid user behavior capture. Clickworkers, mainly in the global south, perform repetitive data-labeling tasks for use in the training of ML models [558]. The market value of such annotations “is projected to reach $13.7 billion by 2030” [228] and the annotation industry is widely reported to have little concern for workers’ rights. Besides welfare and rights, the invisibility of this contribution arguably contributes to a misunderstanding of AI capabilities.
6. Socioeconomic and Environmental
Undermine creative economies
Substituting original works with synthetic ones, hindering human innovation and creativity
6. Socioeconomic and Environmental
Undermining creative economies
LMs may generate content that is not strictly in violation of copyright but harms artists by capitalising on their ideas, in ways that would be time-intensive or costly to do using human labour. This may undermine the profitability of creative or innovative work. If LMs can be used to generate content that serves as a credible substitute for a particular example of human creativity - otherwise protected by copyright - this potentially allows such work to be replaced without the author’s copyright being infringed, analogous to “patent-busting” [158] ... These risks are distinct from copyright infringement concerns based on the LM reproducing verbatim copyrighted material that is present in the training data [188].
6. Socioeconomic and Environmental
Undermining creative economies
LMs may generate content that is not strictly in violation of copyright but harms artists by capitalising on their ideas, in ways that would be time-intensive or costly to do using human labour. Deployed at scale, this may undermine the profitability of creative or innovative work.
6. Socioeconomic and Environmental
Unequal distribution of harms and benefits
AI-driven industries seem likely to tend towards monopoly and could result in huge economic gains for a few actors: there seems to be a feedback loop whereby actors with access to more AI-relevant resources (e.g., data, computing power, talent) are able to build more effective digital products and services, claim a greater market share, and therefore be well-positioned to amass more of the relevant resources [14, 39, 45]. Similarly, wealthier countries able to invest more in AI development are likely to reap economic benefits more quickly than developing economies, potentially widening the gap between them.
6. Socioeconomic and Environmental
Unfair distribution of benefits from model access
Unfairly allocating or withholding benefits from certain groups due to hardware, software, or skills constraints or deployment contexts (e.g. geographic region, internet speed, devices)
6. Socioeconomic and Environmental
Uniformity in the AI field
This group of concerns represents 2% of the sample and highlights two central issues: Western centrality and cultural difference, and unequal participation.
6. Socioeconomic and Environmental
Unintentional: direct
AI designed to benefit animals, humans, or ecosystems has unintended harmful impact on animals
6. Socioeconomic and Environmental
Unintentional: indirect
AI impacts human or ecological systems in ways that ultimately harm animals
6. Socioeconomic and Environmental
Unpredictability of AI development trajectory
The unpredictable trajectory of AI development complicates governance and risk management.
6. Socioeconomic and Environmental
Unrepresentative risk testing
Testing is unrepresentative when the test inputs are mismatched with the inputs that are expected during deployment.
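The mismatch this entry describes can be made concrete with a simple numeric check: compare a feature's distribution in the test set against what the system actually sees in deployment. The sketch below flags a shift in means measured in pooled standard deviations; the function name and the 0.25 threshold are illustrative assumptions, and production pipelines typically use proper two-sample tests (e.g. Kolmogorov–Smirnov) or population stability indices instead:

```python
import statistics


def drift_report(test_inputs, deployment_inputs, threshold=0.25):
    """Flag a numeric feature whose test distribution is far from deployment.

    Compares the two sample means in units of the pooled standard deviation.
    This is a crude, illustrative check, not a substitute for a real
    distribution test.
    """
    mu_test = statistics.mean(test_inputs)
    mu_deploy = statistics.mean(deployment_inputs)
    pooled_sd = statistics.pstdev(test_inputs + deployment_inputs) or 1.0
    shift = abs(mu_test - mu_deploy) / pooled_sd
    return {"shift_in_sd": shift, "unrepresentative": shift > threshold}
```

For example, a test set sampled around 0 against deployment inputs centred around 2 is flagged, while two samples from the same region are not. Per-feature checks like this miss joint-distribution shifts, which is one reason unrepresentative testing is easy to overlook.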
6. Socioeconomic and Environmental
Usurpation of jobs by automation
Eliminated jobs in various types of companies.
6. Socioeconomic and Environmental
Value lock-in
The most powerful AI systems may be designed by and available to fewer and fewer stakeholders. This may enable, for instance, regimes to enforce narrow values through pervasive surveillance and oppressive censorship.
6. Socioeconomic and Environmental
Winner-take-all dynamics
The competitive nature of AI development could lead to significant economic and security advantages for a few entities.
6. Socioeconomic and Environmental
Within-country issues: domestic inequality
Our next problem is the fact that the current AI workforce does not evenly represent world demographics. Men from the US and China, working in the US, for US corporations, are disproportionately highly represented [402, 157, 170, 534]. Realizing the full promise of AI requires that people throughout the world and from all social strata are able to use AI and participate in its design and governance. Solving this problem requires addressing unequal access to AI both within countries and across countries.
6. Socioeconomic and Environmental
Workforce substitution and transformation
Frey and Osborne (2017) analyzed over 700 different jobs regarding their potential for replacement and automation, finding that 47 percent of the analyzed jobs are at risk of being completely substituted by robots or algorithms. This substitution of workforce can have grave impacts on unemployment and the social status of members of society (Stone et al., 2016).
6. Socioeconomic and Environmental
Worsened conflict
Cooperation and conflict: we’re seeing more focus and investment on the kinds of AI capabilities that make conflict more likely and severe, rather than those likely to improve cooperation. So, on our current trajectory, AI seems more likely to have negative long-term impacts in this area.