
Information Asymmetries

Information asymmetries (Section 3.1): private information can lead to miscoordination, deception, and conflict.

Source: MIT AI Risk Repository, risk mit1217

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit1217

Domain lineage

7. AI System Safety, Failures, & Limitations


7.6 > Multi-agent risks

Mitigation strategy

1. Develop and enforce secure, standardized communication protocols for trusted agent interactions to ensure data integrity and verifiable signaling, thereby preventing deceptive communication or agent impersonation within the multi-agent system.
2. Implement comprehensive system monitoring and explainability mechanisms, such as logging agent 'mindsets' and decisions, to audit individual agent knowledge and facilitate the detection and tracing of adverse outcomes arising from information imbalances.
3. Mandate independent third-party auditing and regulatory oversight to verify agent-disclosed information and enforce transparency standards, which serves to mitigate the systemic risks of adverse selection and miscoordination.
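The first mitigation's "verifiable signaling" can be sketched minimally with message authentication: each agent message carries a tag that receivers recompute, so tampered or impersonated messages are rejected. This is an illustrative sketch only, not a method from the repository; it assumes a single shared key (real deployments would typically use per-agent asymmetric keys), and all agent and field names here are hypothetical.

```python
import hashlib
import hmac
import json


def sign_message(shared_key: bytes, sender_id: str, payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag binding sender identity to the payload."""
    body = json.dumps({"sender": sender_id, "payload": payload}, sort_keys=True)
    tag = hmac.new(shared_key, body.encode(), hashlib.sha256).hexdigest()
    return {"sender": sender_id, "payload": payload, "tag": tag}


def verify_message(shared_key: bytes, message: dict) -> bool:
    """Recompute the tag; a mismatch indicates tampering or impersonation."""
    body = json.dumps(
        {"sender": message["sender"], "payload": message["payload"]},
        sort_keys=True,
    )
    expected = hmac.new(shared_key, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])


key = b"agents-shared-secret"  # assumption: one pre-shared key for the demo
msg = sign_message(key, "agent-a", {"bid": 10})
assert verify_message(key, msg)

msg["payload"]["bid"] = 99  # tampering with the payload invalidates the tag
assert not verify_message(key, msg)
```

The constant-time `hmac.compare_digest` comparison avoids leaking tag information through timing, which matters when an adversarial agent can probe the verifier repeatedly.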