Moral
As machine autonomy increases, humans may feel less moral responsibility for their life-or-death decisions.
ENTITY
3 - Other
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit633
Domain lineage
7. AI System Safety, Failures, & Limitations
7.3 > Lack of capability or robustness
Mitigation strategy
1. Mandate Human-in-the-Loop (HITL) Oversight for Critical Decisions
Implement a stringent requirement for human review, final decision-making, and veto power in all high-stakes domains (e.g., healthcare, criminal justice, lethal autonomous systems). This ensures that the AI system functions strictly as a decision-support tool to augment human moral judgment, context, and empathy, thereby preserving the ultimate locus of moral agency and preventing the passive delegation of responsibility.

2. Establish Clear Legal and Operational Accountability
Develop and enforce explicit governance frameworks that unequivocally assign legal responsibility and liability for the AI system's actions and unintended consequences to specific human actors (e.g., developers, operators, or the deploying organization). This action is critical to proactively close the "responsibility gap," reinforce moral and legal accountability, and provide clear mechanisms for redress.

3. Integrate Ethical Design and Value Alignment Principles
Systematically embed human-centric ethical principles, such as transparency, fairness, and the protection of individual dignity, into the AI development and deployment lifecycle. This involves designing systems that resist the reduction of humans to mere data and ensuring that the AI's objectives remain compatible with and supportive of fundamental human values.