7. AI System Safety, Failures, & Limitations

Collusion

Collusion has long been a topic of intense study in economics, law, and politics, among other disciplines. While there is no universal definition of collusion, it generally refers to secretive cooperation between two or more parties at the expense of one or more other parties. Most classic examples of collusion – such as firms working together to set supra-competitive prices at the expense of consumers – tend to be not only secretive but also in violation of some law, rule, or ethical standard. A further distinction is commonly drawn between explicit and tacit collusion (Rees, 1993), depending on whether the colluding parties communicate with each other.

Source: MIT AI Risk Repository (mit1214)

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1214

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.6 > Multi-agent risks

Mitigation strategy

1. Establish Comprehensive, Architecturally Separated Oversight and Telemetry. Implement robust AI governance frameworks (e.g., NIST AI RMF, ISO/IEC 42001) that mandate architectural separation between agent deployment, operation, and regulatory oversight components. A fundamental requirement is a 'telemetry-first' system design that ensures immutable audit trails by logging all collusion-relevant data, including inter-agent communication (prompts, responses, metadata), action traces (bids, prices), and memory operations, enabling continuous anomaly detection.

2. Conduct Adversarial Red Teaming and Develop Specialized Detection Screens. Perform regular, rigorous red-teaming exercises explicitly designed to probe for multi-agent vulnerabilities, including emergent collusion, inter-agent trust exploitation, and the use of steganographic communication methods. Concurrently, develop and integrate advanced statistical and econometric 'screens' that monitor for atypical patterns of coordinated behavior, such as improbable co-movements in pricing or non-price dimensions (e.g., product ratings), to detect both explicit and tacit algorithmic collusion.

3. Restrict Information Exchange and Implement Market-Design Interventions. Limit the information available to competing agents by enforcing strict Role-Based Access Controls (RBAC) and implementing restricted communication protocols to prevent the sharing of competitively sensitive data. From a systemic perspective, policymakers and operators should avoid releasing data that could be used to predict competitors' strategies, while simultaneously designing markets to reduce barriers to entry and promote highly responsive consumer demand, thereby weakening the structural conditions that sustain collusive equilibria.
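To illustrate the kind of statistical 'screen' described in strategy 2, the sketch below flags agent pairs whose price series co-move more tightly than a chosen threshold. This is a minimal, hypothetical example, not a method prescribed by the repository: the `flag_comovement` function, its correlation threshold of 0.95, and the input format (a dict of per-agent price histories) are all illustrative assumptions, and a production screen would use richer econometric tests.

```python
import statistics


def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def flag_comovement(price_series, threshold=0.95):
    """Flag agent pairs whose price paths co-move above `threshold`.

    `price_series` maps an agent id to its observed price history.
    The threshold is an illustrative placeholder; a real screen would
    calibrate it against a competitive baseline for the market.
    """
    agents = sorted(price_series)
    flagged = []
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            r = pearson(price_series[a], price_series[b])
            if r >= threshold:
                flagged.append((a, b, round(r, 3)))
    return flagged


# Agents "a" and "b" move in lockstep; "c" prices independently.
prices = {
    "a": [10, 11, 12, 13, 14],
    "b": [20, 22, 24, 26, 28],
    "c": [5, 3, 9, 1, 7],
}
print(flag_comovement(prices))  # → [('a', 'b', 1.0)]
```

A screen like this is only a trigger for investigation: high price correlation can also arise from a shared cost shock, which is why the strategy above pairs screens with audit-trail telemetry that can reveal whether coordinated behavior was accompanied by inter-agent communication.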