7. AI System Safety, Failures, & Limitations

Type 3: Worse than expected

AI intended to have a large societal impact can turn out to be harmful by mistake; for example, a popular product may create problems that it only partially solves, and only for its own users.

Source: MIT AI Risk Repository

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit03

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. **Implement a Continuous AI Model Robustness and Performance Monitoring Framework.** Mandate rigorous stress testing against adversarial and out-of-distribution inputs, coupled with real-time monitoring of deployed systems to identify and flag performance drift, systemic anomalies, and unintended consequences that compromise safety or reliability (a minimal sketch of this and the next mitigation follows this list).
2. **Establish Comprehensive Human Oversight and Accountability Protocols.** Define clear human-in-the-loop checkpoints and intervention procedures, ensuring designated personnel possess the requisite competence and authority to safely override or shut down the system when unintended harmful behavior is identified, and maintain detailed logs for retrospective analysis.
3. **Integrate Systemic Risk Assessment into the AI Development Lifecycle.** Require pre-deployment risk reports that explicitly analyze potential large-scale secondary effects (e.g., monoculture, procyclicality, or unpredictable interdependencies) arising from mass adoption, and design technical safeguards to prevent the acceleration of systemic complexity.
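To make the first two mitigations concrete, the sketch below wraps a deployed model with a rolling drift check and a human escalation hook. It is a minimal illustration under stated assumptions, not the repository's prescribed implementation: the `MonitoredModel` wrapper, the use of scipy's KS two-sample test as the drift signal, the `on_drift` callback, the `halt()` override, and the window size and p-value threshold are all illustrative choices introduced here.

```python
# Minimal sketch of mitigations 1 and 2, assuming a deployed model
# that emits one scalar confidence score per request. The KS-test
# drift signal and all thresholds are illustrative assumptions.
import logging
from collections import deque
from scipy.stats import ks_2samp

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-oversight")  # detailed logs for retrospective analysis

class MonitoredModel:
    """Wraps a model with drift monitoring (mitigation 1) and a
    human-in-the-loop escalation and shutdown hook (mitigation 2)."""

    def __init__(self, model, reference_scores, on_drift,
                 window_size=500, p_threshold=0.01):
        self.model = model                       # object exposing .predict(x) -> float score
        self.reference = list(reference_scores)  # scores from pre-deployment validation
        self.live = deque(maxlen=window_size)    # rolling window of production scores
        self.on_drift = on_drift                 # callback notifying a designated human
        self.p_threshold = p_threshold
        self.halted = False                      # set True by a human override

    def predict(self, x):
        if self.halted:
            raise RuntimeError("System halted pending human review")
        score = self.model.predict(x)
        self.live.append(score)
        log.info("prediction score=%s", score)
        if len(self.live) == self.live.maxlen:
            # Compare live score distribution against the reference window.
            _, p_value = ks_2samp(self.reference, list(self.live))
            if p_value < self.p_threshold:       # small p => distributions diverge
                log.warning("drift flagged (p=%.4f); escalating", p_value)
                self.on_drift(p_value)           # a human decides whether to halt
        return score

    def halt(self, operator_id):
        """Human override: shut the system down and record who did it."""
        self.halted = True
        log.warning("system halted by operator %s", operator_id)
```

One deliberate design choice in the sketch: once a designated operator calls `halt()`, `predict` fails closed until the system is reviewed, matching the requirement that personnel with the requisite authority can safely shut the system down, while the log records support retrospective analysis.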

ADDITIONAL EVIDENCE

Oftentimes, the whole point of producing a new AI technology is to produce a large (usually positive) impact on society. Therefore, a major category of societal-scale risk arises from large, well-intentioned interventions that go wrong.