7. AI System Safety, Failures, & Limitations

Application

This is the risk posed by the intended application or use case: some use cases are inherently riskier than others (e.g., an autonomous weapons system vs. a customer service chatbot).

Source: MIT AI Risk Repository, risk ID mit188

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit188

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.0 AI system safety, failures, & limitations

Mitigation strategy

1. Prioritize and perform a rigorous AI Risk Classification of the intended application (e.g., per the EU AI Act framework) to determine its inherent risk level (Unacceptable, High, Limited). Implement a policy of Risk Avoidance by prohibiting, or imposing moratoriums on, applications that fall into the Unacceptable Risk category (e.g., autonomous weapons systems, social scoring); a deployment gate of this kind is sketched after this list.

2. For High-Risk applications, enforce a Risk Reduction strategy by mandating stringent Human Oversight (e.g., a human in the loop for critical decisions and an effective human-machine interface) and robust technical controls across the entire system lifecycle, including verifiable data quality, system robustness, and detailed technical documentation, to ensure safety and transparency.

3. Implement an iterative Risk Management System to continuously monitor the application's performance, context of use, and potential for purpose drift post-deployment. This includes a Post-Market Monitoring plan (also sketched below) to detect serious incidents, new risks, or degradation in performance or compliance, and to report them immediately to the relevant authorities.
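A minimal sketch of how the classification-and-gating steps (items 1 and 2) could be encoded in a deployment pipeline, assuming the three EU AI Act style tiers named above. The application categories, control names, and the approve_deployment function are hypothetical illustrations, not part of the repository entry or of any specific regulatory text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright (Risk Avoidance)
    HIGH = "high"                   # allowed only with mandated controls
    LIMITED = "limited"             # transparency obligations only

# Hypothetical mapping of application categories to risk tiers.
APPLICATION_TIERS = {
    "autonomous_weapons": RiskTier.UNACCEPTABLE,
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_triage": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
}

# Hypothetical controls a High-Risk application must evidence before release
# (item 2 of the mitigation strategy).
HIGH_RISK_CONTROLS = {"human_in_the_loop", "data_quality_audit",
                      "robustness_testing", "technical_documentation"}

def approve_deployment(application: str, controls_in_place: set[str]) -> bool:
    """Gate a deployment request on the application's inherent risk tier."""
    tier = APPLICATION_TIERS.get(application)
    if tier is None:
        raise ValueError(f"Unclassified application: {application!r}; classify before deployment")
    if tier is RiskTier.UNACCEPTABLE:
        return False  # Risk Avoidance: prohibited use case
    if tier is RiskTier.HIGH:
        # Risk Reduction: every mandated control must be evidenced
        return HIGH_RISK_CONTROLS <= controls_in_place
    return True  # Limited-risk: transparency duties handled elsewhere

# Example: prohibited use case is rejected; a High-Risk application
# missing one mandated control is also rejected.
assert approve_deployment("autonomous_weapons", set()) is False
assert approve_deployment("medical_triage",
                          {"human_in_the_loop", "data_quality_audit",
                           "robustness_testing"}) is False
```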
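And a sketch of the Post-Market Monitoring idea in item 3: compare a live performance metric against the level recorded at deployment and flag degradation for incident reporting. The accuracy metric, degradation threshold, and reporting interface are assumptions chosen for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PostMarketMonitor:
    """Illustrative post-deployment monitor (item 3 of the mitigation strategy)."""
    baseline_accuracy: float              # performance recorded at deployment time
    degradation_threshold: float = 0.05   # assumed tolerance before escalation
    incidents: list[dict] = field(default_factory=list)

    def record_batch(self, accuracy: float, context: str) -> None:
        """Log one monitoring window; escalate if performance has degraded."""
        drop = self.baseline_accuracy - accuracy
        if drop > self.degradation_threshold:
            self.report_incident(
                kind="performance_degradation",
                detail=f"accuracy dropped by {drop:.3f} in context {context!r}",
            )

    def report_incident(self, kind: str, detail: str) -> None:
        """Stand-in for reporting a serious incident to the relevant authority."""
        self.incidents.append({
            "kind": kind,
            "detail": detail,
            "reported_at": datetime.now(timezone.utc).isoformat(),
        })

# Example: a monitored model slipping well below its deployment baseline.
monitor = PostMarketMonitor(baseline_accuracy=0.92)
monitor.record_batch(accuracy=0.83, context="new user population")
print(monitor.incidents)
```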