7. AI System Safety, Failures, & Limitations

Lack of ethical decision-making

AI models and systems that lack moral reasoning capabilities may make decisions that are unethical or harmful.

Source: MIT AI Risk Repository, mit1075

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1075

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

1. Embed ethical-by-design methodologies by incorporating fairness-aware algorithms and explicitly optimizing for pre-defined ethical constraints (e.g., equalized odds, non-discrimination) during model training, utilizing ethically annotated datasets where possible.

2. Establish robust Human-in-the-Loop (HITL) governance with mandatory human oversight for high-consequence decisions, coupled with interdisciplinary AI Ethics Committees that conduct pre-deployment risk and impact assessments and formally allocate legal and operational accountability.

3. Implement full-lifecycle accountability and interpretability by mandating Explainable AI (XAI) methods (e.g., SHAP, LIME) to justify outputs, and by deploying immutable audit trails and continuous real-time monitoring to detect and flag ethical drift or anomalous, potentially biased decisions post-deployment.
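To make the equalized-odds constraint in step 1 concrete, the sketch below computes the equalized-odds gap (the larger of the true-positive-rate and false-positive-rate differences between two groups) for a binary classifier. This is a minimal illustration, not part of the repository entry: the function name, the two-group encoding, and the synthetic data are all assumptions made for the example.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Max absolute difference in TPR and FPR between groups 0 and 1.

    A gap of 0 means the classifier satisfies equalized odds on this data.
    (Illustrative sketch; assumes binary labels, predictions, and groups.)
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in (0, 1):
        mask = group == g
        # TPR: fraction of group-g positives predicted positive
        tpr = np.mean(y_pred[mask & (y_true == 1)])
        # FPR: fraction of group-g negatives predicted positive
        fpr = np.mean(y_pred[mask & (y_true == 0)])
        rates[g] = (tpr, fpr)
    tpr_gap = abs(rates[0][0] - rates[1][0])
    fpr_gap = abs(rates[0][1] - rates[1][1])
    return max(tpr_gap, fpr_gap)

# Synthetic data where both groups are treated identically:
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(equalized_odds_gap(y_true, y_pred, group))  # → 0.0
```

In a fairness-aware training loop, a gap like this would typically enter the objective as a penalty term or be enforced by post-processing the decision thresholds per group.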