7. AI System Safety, Failures, & Limitations


"Gaps" that arise across the development process where the normal conditions for a complete specification of intended functionality and moral responsibility are not present.

Source: MIT AI Risk Repository, risk ID mit639

Entity: 1 - Human
Intent: 2 - Unintentional
Timing: 1 - Pre-deployment
Risk ID: mit639

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Formal Integration of Ethical Governance: Establish and enforce a comprehensive AI Governance Framework *ab initio* in the development process, mandating explicit documentation of the intended ethical principles, societal values, and complete functional specifications before coding or system training commences. This institutionalizes moral responsibility and prevents *ad hoc* implementation.

2. Mandatory Ex-Ante Impact Assessment: Require rigorous, multi-dimensional Ethical and Societal Impact Assessments (EIAs/SIAs) during the conceptualization and design phases. These assessments must proactively identify, quantify, and document potential failure modes, unintended consequences, and areas of missing moral-responsibility specification, thereby informing necessary design modifications pre-deployment.

3. Cross-Functional Specification Audits and Stakeholder Elicitation: Institute formal, cross-functional teams (comprising engineers, ethicists, and legal experts) to conduct internal specification audits, ensuring alignment between technical design and ethical obligations. Furthermore, mandate inclusive engagement with relevant external stakeholders to validate that the specified functionality and moral boundaries are comprehensive and align with broader societal norms and laws.