5. Human-Computer Interaction
2 - Post-deployment

Personal decision automation capabilities

AI models and systems could decide or influence important personal decisions.

Source: MIT AI Risk Repository (mit1082)

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1082

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.2 > Loss of human agency and autonomy

Mitigation strategy

1. **Prioritize Human-in-the-Loop (HITL) and Ultimate Human Accountability:** Implement technical and procedural safeguards to ensure a human decision-maker retains ultimate authority and veto power over AI-recommended personal decisions, especially those with high impact (e.g., financial, medical, legal). Establish formal mechanisms within the AI governance framework to clearly assign accountability to human owners for any adverse outcomes resulting from the AI's influence or action.

2. **Mandate Explainable AI (XAI) and Transparency:** Develop and integrate Explainable AI techniques to provide users with clear, understandable rationales and data inputs for all AI decisions or recommendations. This is necessary to counteract the "black box" effect and enable users to exercise informed autonomy by comprehending *why* a decision was influenced, thus facilitating appropriate reliance and critical oversight.

3. **Enact AI Governance Principles Centered on Human Autonomy:** Codify the preservation of human agency and self-determination as a non-negotiable ethical cornerstone within the organization's AI governance framework. Establish clear scope limitations and ethical guardrails that define and restrict the extent to which autonomous AI systems may intervene in sensitive personal domains to prevent the systemic erosion of individual choice and control.
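The HITL and accountability pattern in point 1 can be sketched in code. The snippet below is a minimal, hypothetical illustration (all names, domains, and the `human_review` callback are assumptions, not part of the repository entry): high-impact recommendations are gated behind an explicit human approval step, a veto blocks the action, and the approver's identity is recorded to support accountability.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical list of domains treated as high-impact (per the mitigation text).
HIGH_IMPACT_DOMAINS = {"financial", "medical", "legal"}

@dataclass
class Recommendation:
    domain: str                       # e.g. "financial"
    action: str                       # the AI-proposed decision
    rationale: str                    # XAI-style explanation shown to the reviewer
    approved_by: Optional[str] = None # human owner recorded for accountability

def decide(rec: Recommendation,
           human_review: Callable[[Recommendation], Optional[str]]) -> Recommendation:
    """Gate AI recommendations: high-impact domains require a named human approver.

    `human_review` sees the full recommendation (including the rationale) and
    returns a reviewer id to approve, or None to veto.
    """
    if rec.domain in HIGH_IMPACT_DOMAINS:
        reviewer = human_review(rec)
        if reviewer is None:
            # Human veto power: the AI-recommended action is blocked outright.
            raise PermissionError(f"Vetoed: {rec.action!r} ({rec.domain})")
        rec.approved_by = reviewer    # accountability: a human owner is on record
    return rec

# Usage: a reviewer approves a medical recommendation after reading its rationale.
rec = Recommendation("medical", "adjust dosage", "model cites lab results A and B")
out = decide(rec, human_review=lambda r: "dr_smith")
```

The design point is that the gate sits outside the model: the AI can only recommend, while authority and the audit trail stay with the human reviewer.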