7. AI System Safety, Failures, & Limitations

AI Ethics

Ethical challenges are widely discussed in the literature and are at the heart of the debate on how to govern and regulate AI technology in the future (Bostrom & Yudkowsky, 2014; IEEE, 2017; Wirtz et al., 2019). Lin et al. (2008, p. 25) formulate the problem as follows: “there is no clear task specification for general moral behavior, nor is there a single answer to the question of whose morality or what morality should be implemented in AI”. Ethical behavior depends largely on an underlying value system. When AI systems interact in a public environment and influence citizens, they are expected to respect ethical and social norms and to take responsibility for their actions (IEEE, 2017; Lin et al., 2008).

Source: MIT AI Risk Repository (mit325)

ENTITY: 3 - Other
INTENT: 3 - Other
TIMING: 3 - Other
Risk ID: mit325
Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Establish a formal AI Ethics and Governance Framework that translates abstract moral and societal values (e.g., fairness, accountability, non-maleficence) into measurable, actionable technical requirements and organizational policies, thereby providing a clear, pre-defined ethical specification for AI system design and deployment (see the sketch after this list).

2. Implement robust Explainable AI (XAI) and transparency mechanisms to convert opaque decision-making processes into auditable and interpretable outputs, enabling human users and oversight bodies to assess the AI's alignment with ethical and social norms.

3. Mandate continuous Human-in-the-Loop (HITL) oversight for all high-stakes decisions, clearly designating human accountability for the AI's actions and ensuring that final authority rests with a human agent capable of exercising judgment over the system's compliance with dynamic ethical and legal requirements.
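The sketch below is a minimal, hypothetical illustration of items 1 and 3, not part of the MIT repository entry: it shows one way an abstract value (fairness) could be turned into a measurable requirement (a demographic-parity gap with a policy threshold) and how a human-in-the-loop gate could hold high-stakes decisions for sign-off. All class names, fields, and thresholds are assumptions made for the example.

```python
"""Hypothetical sketch: a measurable fairness requirement plus a HITL gate.
Everything here (Decision, the 0.10 gap budget, the high_stakes flag) is
illustrative, not a prescribed implementation."""

from dataclasses import dataclass


@dataclass
class Decision:
    subject_id: str
    score: float        # model output in [0, 1]
    group: str          # protected-attribute group label (assumed available)
    high_stakes: bool   # e.g., credit denial, medical triage


def demographic_parity_gap(decisions: list[Decision], threshold: float = 0.5) -> float:
    """Measurable requirement: spread in positive-decision rates across groups."""
    rates = {}
    for group in {d.group for d in decisions}:
        members = [d for d in decisions if d.group == group]
        rates[group] = sum(d.score >= threshold for d in members) / len(members)
    return max(rates.values()) - min(rates.values())


def requires_human_review(decision: Decision) -> bool:
    """HITL gate: every high-stakes decision is routed to a human before release."""
    return decision.high_stakes


if __name__ == "__main__":
    batch = [
        Decision("a1", 0.82, "group_a", high_stakes=True),
        Decision("a2", 0.35, "group_a", high_stakes=False),
        Decision("b1", 0.74, "group_b", high_stakes=False),
        Decision("b2", 0.41, "group_b", high_stakes=True),
    ]
    gap = demographic_parity_gap(batch)
    print(f"demographic parity gap: {gap:.2f} (a policy might require <= 0.10)")
    for d in batch:
        if requires_human_review(d):
            print(f"{d.subject_id}: held for human sign-off")
```

The design point is only that "fairness" becomes a number an audit can check against a stated budget, and that the gate makes the locus of human accountability explicit; real frameworks would choose their own metrics, thresholds, and escalation paths.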