6. Socioeconomic and Environmental
3 - Other

High energy consumption of large models

Training and deploying large models requires substantial energy expenditure, and the trend toward ever-larger models exacerbates the issue, driving excessive energy usage and negative environmental impact.

Source: MIT AI Risk Repository (mit1205)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit1205

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.6 > Environmental harm

Mitigation strategy

1. Model Optimization and Specialization: Employ smaller, specialized models, or distilled versions of large models, for specific, repetitive tasks to significantly reduce computational overhead during inference. Concurrently, apply model compression techniques such as quantization and pruning to cut the parameter count and precision requirements of existing models, reducing energy consumption per computation.

2. Decentralized and Hardware-Optimized Computing: Accelerate the adoption of on-device (edge) AI processing to localize computation and eliminate the high energy cost of transmitting data to cloud data centers. Furthermore, deploy specialized, energy-efficient hardware architectures, such as Tensor Processing Units (TPUs) or non-silicon processors, which are optimized for AI workloads, in place of general-purpose Graphics Processing Units (GPUs).

3. Sustainable Infrastructure Sourcing and Operation: Co-locate and transition data centers to regions with high-capacity renewable energy sources (e.g., geothermal, hydroelectric) to reduce reliance on fossil-fuel grids. Moreover, implement carbon-aware computing practices, including time-shifting computational workloads to coincide with periods of peak renewable energy availability.
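The quantization measure in the first strategy can be illustrated with a minimal sketch. This is not any particular framework's quantizer; `quantize_int8` and `dequantize` are hypothetical helpers showing symmetric int8 quantization, which stores each weight in 8 bits rather than 32 and so cuts memory traffic (and the energy it costs) roughly fourfold. Real toolchains add per-channel scales and calibration on top of this idea.

```python
from typing import List, Tuple

def quantize_int8(weights: List[float]) -> Tuple[List[int], float]:
    """Map float weights onto the integer range [-127, 127] using a
    single shared scale factor (symmetric quantization)."""
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:          # all-zero weights: any scale works
        scale = 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: List[int], scale: float) -> List[float]:
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

# Each recovered weight is within half a quantization step of the original.
q, s = quantize_int8([0.6, -1.0, 0.2])   # q == [76, -127, 25]
approx = dequantize(q, s)
```

Pruning is complementary: where quantization shrinks each weight's representation, pruning removes low-magnitude weights entirely, so the two are often combined.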
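The carbon-aware time-shifting in the third strategy reduces, at its core, to a scheduling choice: given a forecast of grid carbon intensity, start a deferrable workload in the window where total intensity is lowest. The sketch below assumes a plain Python list as the forecast; `best_start_hour` is a hypothetical helper, and a production system would instead pull forecasts from a regional grid-intensity data feed.

```python
from typing import List

def best_start_hour(intensity_forecast: List[float], duration_hours: int) -> int:
    """Return the start hour whose run of `duration_hours` consecutive
    hours has the lowest total forecast carbon intensity (gCO2/kWh)."""
    if duration_hours <= 0 or duration_hours > len(intensity_forecast):
        raise ValueError("duration must fit inside the forecast window")
    windows = (
        (sum(intensity_forecast[h:h + duration_hours]), h)
        for h in range(len(intensity_forecast) - duration_hours + 1)
    )
    _, start = min(windows)   # ties break toward the earliest hour
    return start

# A 24-hour forecast where midday solar output pushes intensity down.
forecast = [450, 440, 430, 420, 410, 400, 350, 300,
            250, 180, 120, 100,  90, 110, 160, 240,
            320, 380, 420, 450, 470, 480, 470, 460]
start = best_start_hour(forecast, 3)   # hours 11-13 have the lowest total
```

A deferrable training job or batch-inference run would then be queued to begin at `start` instead of immediately, trading latency for a cleaner energy mix.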