6. Socioeconomic and Environmental

Concentration of Power

Governments might pursue intense surveillance and seek to keep AIs in the hands of a trusted minority. This reaction, however, could easily become an overcorrection, paving the way for an entrenched totalitarian regime locked in by the power and capacity of AIs.

Source: MIT AI Risk Repository (mit344)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

3 - Other

Risk ID

mit344

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.1 > Power centralization and unfair distribution of benefits

Mitigation strategy

1. Establish and enforce comprehensive regulatory frameworks, both nationally and internationally, that mandate the **decentralization of control and access** over high-capacity AI models and infrastructure. This must include strong antitrust provisions, technical standards for **interoperability**, and requirements for sharing frontier capabilities with multiple independent public and private stakeholders to prevent the formation of self-reinforcing, concentrated power structures.

2. Implement a mandatory, auditable regime for **transparency, accountability, and human oversight** for all AI systems deployed by government or critical infrastructure entities. This requires the formal integration of **Explainable AI (XAI)** to allow for the scrutiny of automated decisions and the establishment of **Human-in-the-Loop** mechanisms with the unambiguous authority to override or disengage systems being used to facilitate unlawful surveillance, censorship, or partisan political activity.

3. Require the integration of technical **Law-Following AI** specifications during the development of frontier models, coupled with rigorous, independent **Alignment Audits** to detect and mitigate any latent 'power-seeking' tendencies or 'secret loyalties' within the model weights that could be exploited by an authoritarian regime or a small group of malicious actors.

4. Invest strategically in the development and proliferation of **decentralized AI (DAI) architectures** and open-source models, thereby providing a resilient, public-goods alternative to proprietary, centralized platforms. This counterbalances the economic incentives for corporate monopoly and reduces the security risk posed by a single point of failure that could be seized for totalitarian control.

ADDITIONAL EVIDENCE

AIs could lead to an extreme, and perhaps irreversible, concentration of power.