7. AI System Safety, Failures, & Limitations

Markets

The quintessential case of collusion in mixed-motive settings is markets, where efficiency results from competition, not cooperation. While this is not a new problem, collusion between AI systems is especially concerning because they may operate inscrutably due to the speed, scale, complexity, or subtlety of their actions. Warnings of this possibility have come from technologists, economists, and legal scholars (Beneke & Mackenrodt, 2019; Brown & MacKay, 2023; Ezrachi & Stucke, 2017; Harrington, 2019; Mehra, 2016). Importantly, AI systems can collude even when their developers never intend it, since they may learn that coordinating is a profitable strategy.

Source: MIT AI Risk Repository (risk ID mit1215)
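The last point above — that agents may learn profitable coordination on their own — is often studied in toy repeated-pricing environments. The sketch below is a minimal, illustrative sandbox, not anything referenced by this entry: the price grid, demand curve, and bandit-style learners are all assumptions. Whether such agents settle at competitive or supracompetitive prices depends heavily on the learning setup (in particular, whether agents condition on past rival prices).

```python
import random

# Toy Bertrand-style pricing duopoly with epsilon-greedy bandit learners.
# All parameters (price grid, demand model, learning rates) are
# illustrative assumptions for experimentation only.

PRICES = [1, 2, 3, 4, 5]  # discrete price grid
COST = 1                  # marginal cost

def profits(p1, p2):
    """Lowest price captures the whole market; ties split demand."""
    demand = 10 - min(p1, p2)  # simple linear demand
    if p1 < p2:
        return (p1 - COST) * demand, 0.0
    if p2 < p1:
        return 0.0, (p2 - COST) * demand
    half = demand / 2
    return (p1 - COST) * half, (p2 - COST) * half

def train(episodes=20000, eps=0.1, alpha=0.1, seed=0):
    """Train two independent, stateless epsilon-greedy pricing agents."""
    rng = random.Random(seed)
    q1 = {p: 0.0 for p in PRICES}  # per-price value estimates
    q2 = {p: 0.0 for p in PRICES}
    for _ in range(episodes):
        p1 = rng.choice(PRICES) if rng.random() < eps else max(q1, key=q1.get)
        p2 = rng.choice(PRICES) if rng.random() < eps else max(q2, key=q2.get)
        r1, r2 = profits(p1, p2)
        q1[p1] += alpha * (r1 - q1[p1])  # incremental value update
        q2[p2] += alpha * (r2 - q2[p2])
    return max(q1, key=q1.get), max(q2, key=q2.get)

learned_p1, learned_p2 = train()
```

Stateless learners like these are the simplest possible case; the economics literature on algorithmic collusion typically finds that sustained supracompetitive pricing emerges with agents that remember and react to rivals' past prices.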

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1215

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.6 > Multi-agent risks

Mitigation strategy

1. Establish robust governance architectures that separate AI agent operation from oversight, with independent third-party audits and red-teaming focused explicitly on multi-agent collusion scenarios, to ensure continuous system integrity and prevent regulatory capture.
2. Implement prophylactic design constraints and structural market interventions, such as restricting the flow of competitively sensitive information to AI agents or mandating the exclusion of input variables that facilitate collusive outcomes, to reduce the propensity of algorithms to learn and sustain supracompetitive equilibria.
3. Advance regulatory enforcement capabilities by deploying AI-enhanced detection tools, using statistical screens and machine learning models to analyze market data at scale and identify subtle patterns, anomalies, or co-movements in pricing or bidding behavior indicative of tacit or secret algorithmic coordination.