Type 2: Bigger than expected
Harm can result from AI that was not expected to have a large impact at all, such as a lab leak, a surprisingly addictive open-source product, or an unexpected repurposing of a research prototype.
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit02
Domain lineage
7. AI System Safety, Failures, & Limitations
7.3 > Lack of capability or robustness
Mitigation strategy
1. **Implement Formal Second-Order Consequence Analysis and Iterative Risk Tiering:** Mandate a structured framework to evaluate and document potential second- and subsequent-order impacts (e.g., societal, environmental, ethical) and classify AI systems into risk tiers at defined lifecycle points (pre-training through post-deployment). This proactive approach forces the anticipation of non-obvious, large-scale unintended consequences and ensures mitigations are commensurate with the system's evolving risk profile.
2. **Enforce Strict Access Control and Capability Restriction for Dual-Use Systems:** For AI systems with capabilities in high-risk dual-use domains (e.g., cybersecurity or biological research), implement rigorous access controls and security protocols. Proactively remove or restrict dangerous functionalities from models intended for general or open-source release to prevent unexpected repurposing that could result in catastrophic harm.
3. **Establish Clear Developer Accountability and Liability:** Institute a governance framework that enforces explicit accountability and, where appropriate, a strict liability regime on developers of general-purpose AI systems. This measure is intended to incentivize greater investment in a safety-oriented organizational culture, rigorous audits, and robust risk management throughout the entire development lifecycle, thereby reducing the probability of large-scale accidents or unanticipated misuse.
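The risk-tiering step in strategy 1 can be made concrete as a lifecycle checklist. The sketch below is a minimal, hypothetical illustration: the stage names, the 0–10 impact scale, and the tier thresholds are all assumptions, not prescribed values; a real framework would derive them from the organization's own risk policy.

```python
from dataclasses import dataclass
from enum import Enum


class LifecycleStage(Enum):
    # Hypothetical checkpoints spanning pre-training through post-deployment.
    PRE_TRAINING = "pre-training"
    TRAINING = "training"
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"


class RiskTier(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3


@dataclass
class ConsequenceAssessment:
    stage: LifecycleStage
    first_order_impact: int   # assumed 0-10 scale: direct, intended-use harms
    second_order_impact: int  # assumed 0-10 scale: downstream/societal effects


def assign_tier(assessment: ConsequenceAssessment) -> RiskTier:
    # The worst of the first- and second-order scores drives the tier, so a
    # large downstream impact cannot be averaged away by a benign direct use.
    score = max(assessment.first_order_impact, assessment.second_order_impact)
    if score >= 7:
        return RiskTier.HIGH
    if score >= 4:
        return RiskTier.MODERATE
    return RiskTier.LOW
```

Re-running `assign_tier` at each stage captures the "iterative" part of the strategy: a system assessed as low-risk pre-training can be escalated post-deployment if second-order impacts turn out larger than expected.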
ADDITIONAL EVIDENCE
The scope of actions available to an AI technology can be greatly expanded when the technology is copied many times over, or modified in ways its initial creators likely did not intend. However, impact on an unexpectedly large scale can occur even if only one team is responsible for creating the technology.