7. AI System Safety, Failures, & Limitations (3 - Other)

Homogeneity and correlated failures

Homogeneity and Correlated Failures. The current paradigm driving the state of the art in AI is the ‘foundation model’ (Bommasani et al., 2021): large-scale ML models pre-trained on broad data, which can be repurposed for a wide range of downstream applications. The costs required to create such models (and continuing returns to scale) mean that only well-resourced actors can create cutting-edge models (Epoch, 2023; Hoffmann et al., 2022; Kaplan et al., 2020), making them relatively few in number. If current trends continue, it is likely that many AI agents will be powered by a small number of similar underlying models.
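The mechanism is easy to quantify with a toy model. The Python sketch below is illustrative only, not part of the source excerpt: the failure probability, round-robin assignment, and function name are all assumptions. It contrasts the chance that an entire fleet of agents fails simultaneously when every agent shares one base model versus being spread across ten diverse ones.

```python
import random

def p_all_agents_fail(n_agents, n_models, p_flaw, trials=100_000):
    """Monte Carlo estimate of the probability that EVERY agent fails
    at once, modeling a flaw as a per-model event that affects all
    agents built on that model (a common-mode failure)."""
    hits = 0
    for _ in range(trials):
        # Each underlying model is independently flawed with prob. p_flaw.
        flawed = [random.random() < p_flaw for _ in range(n_models)]
        # Agents are assigned to base models round-robin (an assumption).
        if all(flawed[i % n_models] for i in range(n_agents)):
            hits += 1
    return hits / trials

# 100 agents on one shared base model vs. spread across 10 models.
print(p_all_agents_fail(100, 1, 0.05))   # ~0.05: one flaw downs the fleet
print(p_all_agents_fail(100, 10, 0.05))  # ~0.05**10: effectively never
```

With a single shared model, fleet-wide failure is exactly as likely as a flaw in that model; with ten independent models it requires ten simultaneous flaws, which is the intuition behind the diversification strategy listed below.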

Source: MIT AI Risk Repository (mit1224)

ENTITY: 3 - Other
INTENT: 3 - Other
TIMING: 3 - Other
Risk ID: mit1224
Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.6 Multi-agent risks

Mitigation strategy

1. **Architectural Diversification and Redundancy**: Mandate the deployment of functionally and technologically diverse foundation models across critical applications, using distinct architectures, training datasets, or providers to build resilience against common-mode failure. This actively counters the homogeneity risk by ensuring that a flaw or bias in one underlying model does not correlate with failures in all the others, in line with principles of network diversity.
2. **Proactive Correlated Failure Detection and Isolation**: Establish real-time, cross-system monitoring protocols designed to identify correlated outputs, anomalous behavior, or cascading failures across model agents (a minimal detection sketch follows this list). Couple this with clear, pre-defined **human-in-the-loop** protocols that grant operators immediate authority to override, quarantine, or disengage affected systems to contain the propagation of a detected failure.
3. **Enhanced Governance and Supply Chain Transparency**: Institute stringent **AI risk management frameworks** that require thorough due diligence and full transparency for all third-party foundation models, including documentation of training data, known biases, and performance limitations. This governance must enable full **explainability (XAI)** and traceability of AI outputs to the specific underlying model, ensuring accountability and facilitating targeted risk mitigation when homogeneity is unavoidable.
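As one way to operationalize strategy 2, the sketch below estimates how often pairs of deployed models produce identical outputs on a shared probe set and flags pairs that agree suspiciously often. It is a minimal illustration, not part of the repository entry: the function names, the probe-set setup, and the 0.95 threshold are assumptions to be calibrated per application.

```python
from itertools import combinations

def pairwise_agreement(outputs_by_model):
    """outputs_by_model maps a model name to its outputs on a shared
    probe set (same prompt order for every model). Returns, for each
    pair of models, the fraction of probes with identical outputs."""
    rates = {}
    for (a, outs_a), (b, outs_b) in combinations(outputs_by_model.items(), 2):
        rates[(a, b)] = sum(x == y for x, y in zip(outs_a, outs_b)) / len(outs_a)
    return rates

def homogeneous_pairs(rates, threshold=0.95):
    """Flag model pairs that agree so often that a flaw in one is
    likely to be mirrored in the other (threshold is an assumption)."""
    return [pair for pair, rate in rates.items() if rate >= threshold]

# Example: two rebranded deployments of the same base model vs. a
# genuinely distinct one (hypothetical outputs).
outputs = {
    "provider_a": ["yes", "no", "yes", "no"],
    "provider_b": ["yes", "no", "yes", "no"],
    "provider_c": ["yes", "yes", "no", "no"],
}
print(homogeneous_pairs(pairwise_agreement(outputs)))
# -> [('provider_a', 'provider_b')]
```

High agreement is only a proxy for shared failure modes; in practice one would weight agreement on stress-test or adversarial probes more heavily than agreement on easy ones.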