6. Socioeconomic and Environmental

Environmental harms from operating LMs

Large-scale machine learning models, including LMs, have the potential to create significant environmental costs via their energy demands, the associated carbon emissions for training and operating the models, and the demand for fresh water to cool the data centres where computations are run (Mytton, 2021; Patterson et al., 2021).

Source: MIT AI Risk Repository (mit254)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit254

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.6 > Environmental harm

Mitigation strategy

1. Prioritize the Decarbonization and Efficiency of Operations: Implement aggressive strategies to reduce the overall energy footprint and transition to carbon-free energy sources. This encompasses attaining a low Power Usage Effectiveness (PUE), deploying high-efficiency IT hardware, and securing 100 percent carbon-free electricity through Power Purchase Agreements (PPAs) and grid-aware workload management to align consumption with renewable energy availability.

2. Advance Comprehensive Water Stewardship: Design and operate data centers to minimize reliance on freshwater, particularly in water-stressed regions. Mitigation includes achieving low Water Usage Effectiveness (WUE) through the deployment of water-efficient cooling technologies such as closed-loop liquid cooling, adiabatic cooling, or the utilization of non-potable and reclaimed water sources.

3. Establish Full Lifecycle Transparency and Circularity: Mandate the public reporting of environmental performance metrics, including all three scopes of carbon emissions and water usage, to ensure accountability and drive continuous improvement. Furthermore, incorporate circular economy principles by utilizing low-carbon construction materials for infrastructure and instituting robust programs for equipment refurbishment and certified end-of-life recycling.
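The PUE and WUE metrics named above are simple ratios defined by The Green Grid: PUE is total facility energy divided by IT equipment energy, and WUE is site water consumption per unit of IT energy. A minimal sketch of both calculations, using entirely hypothetical annual figures for illustration:

```python
# Illustrative PUE and WUE calculations. All input figures below are
# hypothetical and chosen only to show how the ratios are computed.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.
    1.0 is the theoretical ideal (all energy goes to IT equipment)."""
    return total_facility_kwh / it_equipment_kwh

def wue(site_water_liters: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: litres of site water per kWh of IT energy."""
    return site_water_liters / it_equipment_kwh

# Hypothetical annual readings for a single data centre
total_energy = 120_000_000  # kWh consumed by the whole facility
it_energy = 100_000_000     # kWh consumed by IT equipment alone
water_used = 180_000_000    # litres of water consumed, largely for cooling

print(f"PUE: {pue(total_energy, it_energy):.2f}")       # 1.20
print(f"WUE: {wue(water_used, it_energy):.2f} L/kWh")   # 1.80
```

A lower value is better on both metrics; the "low PUE" and "low WUE" targets in the mitigation strategy amount to driving these ratios toward 1.0 and 0 respectively.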

ADDITIONAL EVIDENCE

Several environmental risks emerge during or before training - e.g. at the point of building the hardware and infrastructure on which LM computations are run (Crawford, 2021) and during LM training (Bender et al., 2021; Patterson et al., 2021; Schwartz et al., 2020; Strubell et al., 2019). This section and the wider report focus on risks of harm at the point of operating the model.