7. AI System Safety, Failures, & Limitations

Capability failures

One reason AI systems fail is that they lack the capability or skill needed to do what they are asked to do.

Source: MIT AI Risk Repository (mit367)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit367

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.3 > Lack of capability or robustness

Mitigation strategy

- Implement robust data governance and MLOps protocols, with formal validation metrics that assess model generalisation and robustness across the operational envelope, so that capability shortfalls are identified and remediated before deployment.
- Use adversarial testing and red teaming to systematically map the boundaries of the AI system's performance, and integrate human-in-the-loop decision protocols for scenarios approaching those identified capability limits.
- Build AI literacy and technical competency across development, governance, and executive oversight functions, so that expectations are realistic and engagement with the system's residual capability gaps is informed.
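As a minimal illustration of the first two mitigations, the validation-gate idea can be sketched as follows. All names, evaluation slices, and thresholds here are hypothetical, not part of the repository entry: the sketch simply flags evaluation slices whose measured accuracy falls below a gate, marking them for human-in-the-loop review.

```python
# Hypothetical sketch: a capability gate over evaluation "slices"
# (segments of the operational envelope). Slice names and the 0.90
# threshold are illustrative assumptions, not from the source entry.

def capability_gate(slice_accuracies, threshold=0.90):
    """Return the slices whose measured accuracy falls below the gate.

    slice_accuracies: dict mapping slice name -> accuracy in [0, 1].
    Slices below `threshold` indicate capability shortfalls that should
    be routed to human-in-the-loop review before deployment.
    """
    return {name: acc for name, acc in slice_accuracies.items()
            if acc < threshold}

results = {
    "in_distribution": 0.97,
    "distribution_shift": 0.88,   # near the system's capability limit
    "adversarial_inputs": 0.62,   # red-team slice
}

shortfalls = capability_gate(results)
# shortfalls -> {"distribution_shift": 0.88, "adversarial_inputs": 0.62}
```

In a real MLOps pipeline this check would run against held-out and red-team evaluation sets, with the threshold set per slice rather than globally.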