7. AI System Safety, Failures, & Limitations

Poor model design choices

The model specifications have a significant impact on the functionality of an AI system. Poor design decisions by the developer can cause the AI system to behave in a biased and unreliable way.

Source: MIT AI Risk Repository (mit1008)

ENTITY: 1 - Human

INTENT: 2 - Unintentional

TIMING: 1 - Pre-deployment

Risk ID: mit1008

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Mandate comprehensive Model Governance and Validation Frameworks. Implement mandatory independent validation during the pre-deployment phase, specifically requiring a **Conceptual Soundness Check** and **Design Risk Analysis** (such as dFMEA) to verify that the model architecture and specifications adhere to industry best practices, align with the system's intended purpose, and integrate robustness and capability constraints by design.

2. Integrate Fairness and Ethical Design Constraints. Enforce the selection and configuration of **fairness-aware algorithms** during model development. This includes leveraging techniques like fair representation learning, adversarial debiasing, or the application of explicit **fairness constraints** in the loss function (fair regularization) to proactively mitigate algorithmic bias that may stem from poor design choices.

3. Establish Rigorous Documentation and Accountability Protocols. Require the creation of detailed model documentation that explicitly captures every design decision, underlying assumption, and potential limitation of the model (e.g., a Model Card). This measure, coupled with **Targeted Training** for model developers and risk managers on established modeling best practices, ensures a traceable audit trail and clear accountability for design choices.
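The fair-regularization technique mentioned in the second mitigation can be sketched as a task loss plus a weighted fairness penalty. The minimal NumPy illustration below uses a demographic-parity gap as the penalty; the penalty choice, the MSE task loss, and the `lam` weight are illustrative assumptions, not prescriptions from the repository entry:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in mean predicted score between two groups (0 and 1)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def fair_regularized_loss(y_true, y_pred, group, lam=1.0):
    """Task loss (MSE here) plus a fairness penalty weighted by lam."""
    task_loss = np.mean((y_true - y_pred) ** 2)
    return task_loss + lam * demographic_parity_gap(y_pred, group)

# Toy example: predictions that systematically favour group 0 incur a penalty.
y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.8, 0.2, 0.1])  # high scores for group 0, low for group 1
group = np.array([0, 0, 1, 1])
print(fair_regularized_loss(y_true, y_pred, group, lam=0.5))  # 0.325 MSE + 0.5 * 0.7 gap = 0.675
```

Raising `lam` pushes the optimizer toward predictions whose average scores are similar across groups, at some cost in task accuracy; in practice the penalty and groups would be chosen per application.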