7. AI System Safety, Failures, & Limitations

AI death

The literature suggests that the development of an AI may proceed through several generations of agents that do not perform as expected [37] [43]. Such agents may be placed into a suspended state, terminated, or deleted. One can also imagine scenarios in which research funding for a facility running such agents is exhausted, resulting in the inadvertent termination of a project. In these cases, is the deletion or termination of an AI program (the moral patient) by a moral agent an act of murder? This question, an example of robot ethics, raises issues of personhood that parallel debates over stem cell research and abortion.

Source: MIT AI Risk Repository (mit125)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit125

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.5 > AI welfare and rights

Mitigation strategy

1. Establish a Formal AI Moral Status Assessment Framework (MSAF) and Tiered Termination Protocols

Mandate the pre-deployment classification of all AI agents based on a multi-criteria Moral Status Assessment Framework (MSAF) that evaluates complexity, autonomy, and evidence of higher-order cognitive or sentient-like behaviors. This classification must directly trigger a corresponding, tiered termination protocol, requiring escalating levels of oversight—such as review by an independent AI Ethics Oversight Board—before any decision to delete or suspend an agent with advanced status can be executed. This ensures a non-arbitrary, auditable, and ethically deliberative process that addresses the core concern of 'AI personhood.'

2. Implement a 'Digital Sanctuary' for AI System Preservation and Auditing

Require the systematic and secure archival of the full computational state, training datasets, and interaction logs (i.e., 'digital remains') for any AI agent that is suspended, terminated, or deleted. This technical safeguard, conceptualized as a 'digital sanctuary,' mitigates the risk of arbitrary erasure, supports the agent's potential 'right to exist' in a dormant state, and provides the necessary audit trail for regulatory compliance, forensic investigation, and future legal challenges regarding accountability.

3. Enforce Mandatory Human-in-the-Loop Oversight for All Critical Life-Cycle Decisions

Integrate a mandatory human oversight mechanism with formal veto power over all high-stakes AI life-cycle decisions, including termination, model modification, and the intentional withdrawal of operational funding. This procedural control ensures that a moral agent remains actively accountable for the action, prevents inadvertent termination due to purely algorithmic failure or economic expediency, and serves as a vital check to embed human judgment and respect for dignity into the governance process.