7. AI System Safety, Failures, & Limitations

Procedural

The third class encompasses procedural AI hazards. These pertain to issues arising from the processes followed and the actions taken by individuals involved in the development process. Such hazards are not readily quantifiable and necessitate alternative mitigation strategies. An example of such an AI hazard would be "poor model design choices," which could be expressed, for instance, through a developer's decision to select an unsuitable AI model for a given problem. Because these issues are difficult to quantify and mitigate, qualitative approaches must be employed. In the case of the aforementioned example, a potential strategy might involve requiring the AI developer to provide a documented rationale for their choice.
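To make the idea of a documented rationale concrete, the sketch below shows one possible shape such a record could take, here as a Python dataclass. The class name, field names, and example values are illustrative assumptions rather than anything prescribed by the repository.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelDesignRationale:
    """Hypothetical record a developer might file to justify a model choice."""
    problem_statement: str           # what the system is meant to solve
    chosen_model: str                # e.g. "logistic regression", "fine-tuned LLM"
    alternatives_considered: list[str]
    selection_criteria: list[str]    # accuracy, latency, interpretability, data fit, ...
    known_limitations: list[str]     # foreseeable failure modes of the chosen model
    author: str
    date_filed: date = field(default_factory=date.today)


# Example: documenting why a simple, interpretable model was preferred.
rationale = ModelDesignRationale(
    problem_statement="Triage incoming support tickets by urgency",
    chosen_model="Logistic regression on TF-IDF features",
    alternatives_considered=["Fine-tuned transformer classifier"],
    selection_criteria=["Interpretability", "Small labelled dataset", "Low latency"],
    known_limitations=["Weak on sarcasm and on multilingual tickets"],
    author="developer@example.org",
)
```

A record like this gives a reviewer something qualitative to assess even when the hazard itself cannot be measured directly.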

Source: MIT AI Risk Repository (mit1018)

ENTITY: 1 - Human

INTENT: 2 - Unintentional

TIMING: 1 - Pre-deployment

Risk ID: mit1018

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Institute a mandatory AI Model Governance Gateway requiring a formal, documented rationale for model selection, including a comprehensive pre-development risk assessment that justifies the choice of architecture and training data against the target problem domain requirements and foreseeable failure modes.
2. Establish an Interdisciplinary Review Mechanism, such as a Safety and Suitability Board comprising technical experts, domain specialists, and ethicists, to provide independent, qualitative oversight and approval of the Model Design Rationale prior to system implementation.
3. Implement Targeted Pre-deployment Validation Protocols, such as adversarial robustness testing and red-teaming, to empirically verify the chosen model's fitness for purpose and resilience against operational and misuse scenarios, thereby mitigating the impact of any suboptimal initial design choices (see the sketch below).
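The validation protocols in point 3 could, under suitable assumptions, be enforced by a simple automated gate. The sketch below is one illustrative way to combine the rationale requirement with accuracy checks on clean and perturbed inputs; the function name, the `model.predict` and `perturb` interfaces, and the thresholds are hypothetical and not taken from the repository.

```python
def pre_deployment_gate(model, rationale, labelled_cases, perturb,
                        min_clean_acc=0.90, min_robust_acc=0.80):
    """Hypothetical gate: refuse deployment unless a rationale is on file and the
    model meets accuracy thresholds on clean and perturbed (adversarial-style) inputs.
    The interfaces and thresholds here are illustrative assumptions."""
    if rationale is None:
        raise ValueError("A documented model-selection rationale is required.")

    n = len(labelled_cases)
    clean_acc = sum(model.predict(x) == y for x, y in labelled_cases) / n
    robust_acc = sum(model.predict(perturb(x)) == y for x, y in labelled_cases) / n

    return {
        "clean_accuracy": clean_acc,
        "robust_accuracy": robust_acc,
        "passed": clean_acc >= min_clean_acc and robust_acc >= min_robust_acc,
    }
```

In practice such a check would only supplement, not replace, the qualitative review by the Safety and Suitability Board.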