Safety
A primary concern is the emergence of human-level or superhuman generative models, commonly referred to as AGI, and the existential or catastrophic risks they may pose to humanity. Relatedly, AI safety aims to prevent deceptive or power-seeking machine behavior, model self-replication, and shutdown evasion. Ensuring controllability and human oversight and implementing red-teaming measures are considered essential for mitigating these risks, as are expanding AI safety research and fostering safety cultures within AI organizations rather than fueling the AI race. Furthermore, papers address risks from unforeseen emergent capabilities in generative models, the restriction of access to dangerous research, and the pausing of AI research until safety or governance measures have improved. Another central issue is the fear that AI could be weaponized or leveraged for mass destruction, especially by using LLMs to ideate and plan how to acquire, modify, and disseminate biological agents. More generally, the literature highlights the threat of AI misuse by malicious individuals or groups, especially in the context of open-source models, underscoring the importance of robust safety measures.
ENTITY: 2 - AI
INTENT: 3 - Other
TIMING: 3 - Other
Risk ID: mit71
Domain lineage: 7. AI System Safety, Failures, & Limitations > 7.1 AI pursuing its own goals in conflict with human goals or values
Mitigation strategy
1. Prioritize fundamental AI alignment and safety research to ensure provable controllability and concordance with human values, and establish a mandatory, audited safety-first culture across all AI development organizations to mitigate the risk of deceptive or power-seeking emergent behaviors.
2. Implement stringent pre-deployment control and oversight mechanisms, including mandatory red-teaming and kill-switch protocols, to prevent model self-replication and ensure continuous human monitoring and intervention capability, especially for systems approaching general-purpose intelligence.
3. Develop and enforce international governance frameworks that restrict access to powerful and potentially weaponizable generative models (e.g., models usable for biological agent ideation and planning) and institute mechanisms for coordinated, temporary research pauses when critical, unforeseen capabilities are identified.