Collusion between LLM-Agents
While it would often be preferable for LLM-agents to be cooperative, cooperation can be undesirable if it undermines pro-social competition or produces negative externalities for coalition non-members (Dorner, 2021; Buterin, 2019; Dafoe et al., 2020). Collusion between relatively simple AI systems has been observed both in the real world (Assad et al., 2020; Wieting and Sapi, 2021) and in synthetic experiments (Brown and MacKay, 2023; Calvano et al., 2020; Klein, 2021). Collusion can occur through explicit or steganographic communication. Steganographic communication hides information in seemingly innocuous content (Roger and Greenblatt, 2023), posing challenges for the monitoring and detection of collusion.
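As a minimal sketch of the statistical side of steganalysis (the function names, threshold, and toy messages below are hypothetical, not a production detector): compare the token-frequency distribution of an agent's outgoing messages against a baseline of audited, known-benign traffic using KL divergence, and flag channels whose divergence exceeds a threshold. Note that a capable steganographic scheme can be designed to match baseline statistics, so this kind of check is a first filter, not a guarantee.

```python
import math
from collections import Counter

def kl_divergence(p: Counter, q: Counter, vocab: set, eps: float = 1e-9) -> float:
    """KL(P || Q) over a shared vocabulary, with additive smoothing to avoid log(0)."""
    p_total = sum(p.values()) + eps * len(vocab)
    q_total = sum(q.values()) + eps * len(vocab)
    div = 0.0
    for tok in vocab:
        p_prob = (p[tok] + eps) / p_total
        q_prob = (q[tok] + eps) / q_total
        div += p_prob * math.log(p_prob / q_prob)
    return div

def flag_suspicious(messages, baseline, threshold=0.5):
    """Flag a channel whose token distribution drifts far from a benign baseline."""
    msg_counts = Counter(tok for m in messages for tok in m.split())
    base_counts = Counter(tok for m in baseline for tok in m.split())
    vocab = set(msg_counts) | set(base_counts)
    return kl_divergence(msg_counts, base_counts, vocab) > threshold

# Hypothetical usage: baseline drawn from audited, known-benign transcripts.
baseline = ["confirm order 42 shipped", "inventory level nominal"]
suspect = ["confirm ordre 42 shippped", "inventry levell nominal"]  # odd spellings could carry hidden bits
print(flag_suspicious(suspect, baseline))
```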
ENTITY
2 - AI
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit1487
Domain lineage
7. AI System Safety, Failures, & Limitations
7.6 > Multi-agent risks
Mitigation strategy
1. Implement advanced, multi-modal detection and steganalysis methodologies to actively monitor LLM-agent communications and behavioral patterns. This includes statistical analysis, temporal pattern detection, and adversarial machine learning techniques to reliably distinguish subtle, covert (steganographic) collusion from legitimate cooperation signals.
2. Engineer verifiably competitive agent architectures and internal safeguards. Integrate mechanisms such as multi-model output validation and specialized monitoring agents (see the sketch after this list) to disincentivize or inhibit the emergence of collusive strategies that prioritize cumulative long-term rewards over competitive market dynamics.
3. Establish a robust, socio-technical Multi-Agent Governance framework. This includes instituting an Agent Governance Board, maintaining a clear Agent Registry detailing each agent's roles and influence, and mandating regular red-teaming and chaos-engineering exercises to stress-test the system for emergent collusive behaviors.
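One way the multi-model output validation in point 2 could look in practice (the `Validator` interface, quorum value, and stub models below are hypothetical assumptions, not a prescribed design): route the same task to several independently trained models, accept the primary agent's action only when a quorum of validators concurs, and escalate systematic disagreement for review, e.g. by the governance board from point 3.

```python
from collections import Counter
from typing import Callable, List, Optional

# Hypothetical validator interface: each callable maps a task prompt to a discrete action.
Validator = Callable[[str], str]

def validated_action(task: str, primary: Validator,
                     validators: List[Validator], quorum: float = 0.5) -> Optional[str]:
    """Accept the primary agent's action only if a quorum of independent models concurs.

    Returns the action if validated, otherwise None (escalate to review).
    """
    proposed = primary(task)
    votes = Counter(v(task) for v in validators)
    return proposed if votes[proposed] / len(validators) > quorum else None

# Hypothetical usage with stub lambdas standing in for independent LLMs.
primary = lambda task: "undercut_price"
validators = [lambda task: "undercut_price",
              lambda task: "hold_price",
              lambda task: "undercut_price"]
print(validated_action("set price for SKU-7", primary, validators) or "escalated for review")
```

Independence of the validator models matters here: if the validators share training data or a base model with the primary agent, they may share its collusive tendencies and the quorum check loses its value.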