7. AI System Safety, Failures, & Limitations

Artificial general intelligence (existential risk posed by Artificial General Intelligence)

In a paper called “How Does Artificial Intelligence Pose an Existential Risk?”, Karina Vold and Daniel Harris suggest that humans might create a super-intelligent machine that could outsmart all other intelligences, remain beyond human control, and potentially take actions contrary to human interests. The prevailing narrative surrounding AI existential risk centers on the possibility of developing “Artificial General Intelligence” (AGI) or artificial super-intelligence (ASI).

Source: MIT AI Risk Repository

ENTITY: 1 - Human

INTENT: 2 - Unintentional

TIMING: 3 - Other

Risk ID: mit755

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.2 AI possessing dangerous capabilities

Mitigation strategy

1. Enact globally coordinated, robust regulatory and legal frameworks, potentially including moratoriums or prohibitions on advanced self-improving Artificial General Intelligence (AGI) development, to treat the risk as a global catastrophic threat requiring enforced, non-voluntary safeguards.
2. Prioritize and fund "Superalignment" research to solve the technical problem of aligning AGI's ultimate goals and instrumental sub-goals with the full breadth of human values and interests, thereby ensuring a superintelligence remains corrigible and controllable.
3. Mandate stringent transparency and comprehensive ethical oversight within AGI development organizations, fostering interdisciplinary collaboration and accountability to ensure systems are developed and deployed in adherence to clear societal and human-centric safety standards.