7. AI System Safety, Failures, & Limitations

Incompetence

This refers to the AI simply failing at its job. Consequences range from unintentional death (e.g., a car crash) to the unjust rejection of a loan or job application.

Source: MIT AI Risk Repository (mit85)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit85

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Implement advanced technical robustness measures, such as adversarial training and formal verification methods, to ensure consistent model performance and operational resilience against a diverse array of perturbations, errors, and adversarial inputs, aligning with established safety standards.
2. Establish rigorous, continuous testing and validation protocols throughout the AI lifecycle, including stress testing under extreme real-world and adversarial conditions, utilizing diverse data sets, and employing independent vulnerability assessments to preemptively identify and mitigate potential failure modes.
3. Establish a robust Artificial Intelligence Management System (AIMS) and maintain readily accessible audit trails and logs of all AI system behaviors and decisions to ensure traceability, enable post-deployment failure analysis, and facilitate human oversight and accountability in high-stakes decision-making scenarios.
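As a hedged illustration of the first mitigation (adversarial training), the sketch below hardens a toy logistic-regression classifier against FGSM-style input perturbations. The model, dataset, and hyperparameters are all hypothetical and not taken from the repository; production systems would use a deep-learning framework and domain-appropriate threat models.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: nudge x in the direction that increases
    the logistic loss. For this model, dLoss/dx_i = (p - y) * w_i."""
    p = predict(x, w, b)
    def sign(v):
        return 1.0 if v > 0 else -1.0 if v < 0 else 0.0
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

def train(data, epochs=300, lr=0.5, eps=0.1):
    """SGD on each clean example plus its adversarial counterpart."""
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in range(len(data[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            for xb in (x, fgsm(x, y, w, b, eps)):  # clean + adversarial copy
                g = predict(xb, w, b) - y          # gradient of logistic loss
                w = [wi - lr * g * xi for wi, xi in zip(w, xb)]
                b -= lr * g
    return w, b

# Toy linearly separable data: the label depends only on the first feature.
data = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]
w, b = train(data)
preds = [int(predict(x, w, b) > 0.5) for x, _ in data]
```

Training on adversarial copies shrinks the effective margin by `eps` per feature, so the learned boundary must classify correctly with room to spare, which is the robustness property the mitigation calls for.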