6. Socioeconomic and Environmental | 1 - Pre-deployment

Development of unsafe AGI

The risks associated with the race to develop the first AGI, including the development of poor-quality, unsafe AGI and heightened struggles over political power and control.

Source: MIT AI Risk Repository (mit104)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

1 - Pre-deployment

Risk ID

mit104

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.4 > Competitive dynamics

Mitigation strategy

1. Implement global-scale regulatory and coordination frameworks (such as compute accounting, training caps, and an AI Neutral Zone) to structurally de-incentivize the competitive acceleration of AGI development and reorient incentives toward verifiable safety and collective alignment.
2. Mandate the creation and rigorous review of formal **Safety Cases** at defined pre-deployment development checkpoints, requiring affirmative proof of model alignment, capability non-harmfulness (e.g., inability cases), and robustness against sophisticated misuse via comprehensive red-teaming.
3. Establish and enforce quantifiable, auditable technical standards for **Compute Transparency and Limitation**, imposing hard caps on the total computational resources (FLOPs) used in the training and operation of models to create a measurable backstop against the unchecked development of dangerously powerful systems.
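The compute-accounting and FLOP-cap ideas above can be sketched in code. The snippet below uses the common rough estimate that training a dense model costs about 6 FLOPs per parameter per token; the cap value, function names, and example model sizes are illustrative assumptions, not figures from the repository or any actual regulation.

```python
# Minimal sketch of compute accounting against a hypothetical training FLOP cap.
# Assumptions: the ~6*N*D training-cost heuristic and the cap value are
# illustrative only; a real regime would define both precisely.

TRAINING_FLOP_CAP = 1e25  # hypothetical regulatory cap, in FLOPs


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training cost: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens


def within_cap(n_params: float, n_tokens: float,
               cap: float = TRAINING_FLOP_CAP) -> bool:
    """Check a proposed training run against the cap before it begins."""
    return estimated_training_flops(n_params, n_tokens) <= cap


# Example: a 70B-parameter model trained on 2T tokens -> ~8.4e23 FLOPs,
# which falls under the hypothetical 1e25 cap.
print(within_cap(70e9, 2e12))
```

A pre-deployment checkpoint (item 2 above) could require this kind of accounting to be reported and audited before training is authorized, making the cap in item 3 measurable rather than aspirational.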