Automation, Access and Environmental Harms
Harms that arise from environmental or downstream economic impacts of the language model
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit253
Domain lineage
6. Socioeconomic and Environmental
6.0 > Socioeconomic & Environmental
Mitigation strategy
1. Mandate the deployment of specialized, resource-efficient language models and model-compression techniques (e.g., int8 quantization) to reduce the computational intensity and associated carbon footprint of AI inference, reserving large-scale models for tasks where their complexity is demonstrably required.
2. Develop and implement "Sustainable Prompting" guidelines that educate end-users and developers to minimize prompt complexity and response length, directly reducing the energy consumed per generative-AI query and lowering operational costs.
3. Establish and enforce comprehensive AI governance frameworks, such as the NIST AI Risk Management Framework (AI RMF), tailored to the environmental and socioeconomic domains, so that LLM-driven applications are continuously evaluated across their full lifecycle for impacts on global equity, labor displacement, and resource depletion.
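As a minimal illustration of the model-compression technique named in point 1, the sketch below applies symmetric post-training int8 quantization to a toy list of weights. The weight values and helper names are hypothetical; real deployments would use a framework's quantization tooling (e.g., PyTorch or TensorFlow Lite) rather than this pure-Python sketch.

```python
# Sketch: symmetric post-training int8 quantization on toy weights.
# All values are illustrative assumptions, not taken from a real model.

def quantize_int8(weights):
    """Map float weights onto int8 range [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.05, 0.33, -0.91]   # toy fp32-style weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 stores 1 byte per weight vs. 4 bytes for fp32: a 4x memory
# reduction, which is one driver of the lower inference footprint.
fp32_bytes = 4 * len(weights)
int8_bytes = 1 * len(weights)
```

The 4x reduction in weight storage translates into smaller memory traffic and, on hardware with int8 arithmetic support, lower energy per inference; the quantization error is bounded by half the scale per weight.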
ADDITIONAL EVIDENCE
LMs create risks of broader societal harm similar to those generated by other forms of AI and other advanced technologies. Many of these risks are more abstract or indirect than the harms analysed in the sections above, and they depend on broader commercial, economic and social factors, so the relative impact of LMs is uncertain and difficult to forecast. Their more abstract nature does not make these risks any less pressing. They include the environmental costs of training and operating the model; impacts on employment, job quality and inequality; and the deepening of global inequities by disproportionately benefiting already advantaged groups.