6. Socioeconomic and Environmental

Environmental

The risk of harm to the natural environment posed by the ML system.

Source: MIT AI Risk Repository (mit202)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit202

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.6 > Environmental harm

Mitigation strategy

1. Prioritize **Algorithmic and Hardware Efficiency** by mandating compressed models (e.g., knowledge distillation, quantization), sparse architectures, and energy-efficient computational hardware (e.g., TPUs, NPUs) during design. In addition, use **Carbon-Aware Scheduling** to align computationally intensive training and inference with periods of high renewable energy availability on the power grid.
2. Implement a **Systemic Resource Governance** framework designed to anticipate and counteract the **Jevons Paradox (Rebound Effect)**. This requires policy integration, such as carbon taxes, cap-and-trade, or resource taxation, to decouple the efficiency gains realized by the ML system from an overall increase in resource consumption and associated environmental impact.
3. Enforce **Physics-Informed ML Models** by incorporating physical constraints and known conservation laws into algorithms used for environmental prediction (e.g., climate modeling, resource management). This improves prediction accuracy and reliability, mitigating errors that could lead to environmental harm, such as unnecessary resource spin-up or unsustainable resource allocation (e.g., overfishing quotas).
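The carbon-aware scheduling idea in point 1 can be sketched as a window search over a carbon-intensity forecast. This is a minimal illustration, not a production scheduler: the hourly forecast values are invented for the example, and a real deployment would fetch them from a grid-data provider rather than hard-coding them.

```python
def best_start_hour(forecast, job_hours):
    """Return the start index of the `job_hours`-long window with the
    lowest mean forecast carbon intensity (gCO2eq/kWh)."""
    if job_hours > len(forecast):
        raise ValueError("job longer than forecast horizon")
    best_start, best_mean = 0, float("inf")
    for start in range(len(forecast) - job_hours + 1):
        mean = sum(forecast[start:start + job_hours]) / job_hours
        if mean < best_mean:
            best_start, best_mean = start, mean
    return best_start

# Illustrative hourly forecast: cleaner grid midday (solar), dirtier otherwise.
forecast = [420, 410, 400, 390, 300, 180, 120, 110, 130, 250, 380, 430]
print(best_start_hour(forecast, 3))  # -> 6: hours 6-8 have the lowest mean
```

A real scheduler would also weigh deadlines and resource contention; the point here is only that shifting a deferrable training job by a few hours can materially change its emissions profile.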

ADDITIONAL EVIDENCE

There are three major ways in which ML systems can harm the environment. The first is increased pollution or contribution to climate change due to the system's consumption of resources. This relates to the energy cost and efficiency during training and inference, hence the energy efficiency of the chosen algorithm, its implementation, and the training procedure are key factors here [5, 113, 171]. Other key factors include the energy efficiency of the system's computational hardware and the type of power grid powering the ML system, since some power sources (e.g., wind turbines) are cleaner than others (e.g., fossil fuels) [85]. The second is the negative effect of an ML system's predictions on the environment, which relates to the system's use case, prediction accuracy, and robustness. For example, an ML system used for server scaling may spin up unnecessary resources due to prediction error, causing an increase in electricity consumption and associated environmental effects. Another ML system may be used to automatically adjust fishing quotas, where prediction errors could result in overfishing. Finally, automating a task often results in knock-on effects such as increased usage due to increased accessibility. This is known as the Jevons Paradox [97] or Khazzoom-Brookes postulate [25, 104, 156]. For example, public transit users may adopt private autonomous vehicles and cause a net increase in the number of vehicles on the road [128].
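The first harm pathway above, emissions from resource consumption, reduces to simple arithmetic: energy drawn times datacenter overhead times grid carbon intensity. The sketch below uses invented example numbers (power draw, runtime, PUE, grid intensities) purely to show how the grid mix dominates the result; none of them are measurements from a real system.

```python
def training_footprint_kg(avg_power_kw, hours, pue, grid_gco2_per_kwh):
    """Estimate training emissions in kg CO2eq:
    energy (kWh) = average power draw * runtime * datacenter PUE;
    emissions    = energy * grid carbon intensity (gCO2eq/kWh)."""
    energy_kwh = avg_power_kw * hours * pue
    return energy_kwh * grid_gco2_per_kwh / 1000.0

# Same hypothetical job (a ~3 kW node for 48 h, PUE 1.2) on two grids:
print(training_footprint_kg(3.0, 48, 1.2, 400))  # fossil-heavy grid
print(training_footprint_kg(3.0, 48, 1.2, 50))   # renewable-heavy grid
```

With these assumed inputs the fossil-heavy grid yields roughly eight times the emissions of the renewable-heavy one for an identical workload, which is why the power-grid factor cited in [85] matters as much as algorithmic efficiency.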