6. Socioeconomic and Environmental

Value lock-in

The most powerful AI systems may be designed by and available to fewer and fewer stakeholders. This may enable, for instance, regimes to enforce narrow values through pervasive surveillance and oppressive censorship.

Source: MIT AI Risk Repository (mit573)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit573

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.1 > Power centralization and unfair distribution of benefits

Mitigation strategy

1. Establish and enforce mandatory, comprehensive Artificial Intelligence Management Systems (AIMS), informed by standards such as ISO/IEC 42001, to ensure transparent, ethical, and publicly accountable governance across the entire AI lifecycle, thereby mitigating the risk of exclusive design and deployment by a few stakeholders.

2. Integrate core architectural features into advanced AI systems that promote "epistemic humility" and resist premature or total value lock-in, by enabling mechanisms for long-term value reflection, societal deliberation, and the capacity to revise goals and values as human moral and ethical understanding evolves.

3. Implement strict, internationally coordinated regulatory frameworks — including restricted access controls (e.g., compute monitoring, export controls) and legal liability regimes for developers — to govern the deployment of powerful, general-purpose AI, thereby limiting the ability of narrow regimes to unilaterally leverage these systems for pervasive surveillance and oppressive censorship.