7. AI System Safety, Failures, & Limitations (3 - Other)

Multi-Agent Security

Multi-agent security (Section 3.7): multi-agent systems give rise to new kinds of security threats and vulnerabilities.

Source: MIT AI Risk Repository, risk ID mit1242

ENTITY

3 - Other

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit1242

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.6 > Multi-agent risks

Mitigation strategy

1. **Implement Granular Cryptographic Identity and Access Controls**
   * Require robust cryptographic identity verification for all agents.
   * Enforce granular Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) to uphold the principle of least privilege, restricting agent permissions to the minimum necessary for task completion.
   * Use short-lived tokens and periodically review assigned roles to limit the window of exposure if credentials are compromised.
2. **Secure Inter-Agent Communication and Establish Dynamic Trust Mechanisms**
   * Mandate message authentication and end-to-end encryption on all inter-agent communication channels to prevent message poisoning and unauthorized interception.
   * Integrate dynamic trust models that assess the real-time reliability and reputation of peer agents from behavioral patterns and the quality of past interactions, so that agents can autonomously limit risk from rogue or compromised collaborators.
3. **Establish Architectural Isolation and Impact Limitation**
   * Implement strict network segmentation and agent compartmentalization to reduce interconnectedness and contain the lateral movement of a breach within a defined system boundary.
   * Set explicit, resource-based boundaries on agent permissions and define failure-containment zones, so that a compromise of one component cannot escalate into a system-wide breach or catastrophic loss.
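The first strategy (cryptographic identity with least-privilege roles and short-lived tokens) can be sketched in a few lines. This is an illustrative toy, not a production design: the agent IDs, role table, and hardcoded signing secret are all hypothetical, and a real deployment would use a PKI or identity provider rather than a shared in-process key.

```python
import hashlib
import hmac
import time

SECRET = b"demo-signing-key"  # hypothetical; real systems use managed keys, not literals

# Hypothetical role table: each role gets only the permissions its tasks require.
ROLE_PERMISSIONS = {
    "retriever": {"read:documents"},
    "planner": {"read:documents", "write:plans"},
}

def issue_token(agent_id: str, role: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived, HMAC-signed token binding an agent to a role."""
    expiry = int(time.time()) + ttl_seconds
    payload = f"{agent_id}|{role}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def check_permission(token: str, permission: str) -> bool:
    """Verify signature and expiry, then enforce least privilege via the role table."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    _agent_id, role, expiry = payload.split("|")
    if int(expiry) < time.time():
        return False  # expired: a short lifetime bounds the damage from credential theft
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The short TTL means a stolen token is only useful briefly, which is the point of the "short-lived tokens" bullet above; periodic role review would additionally shrink `ROLE_PERMISSIONS` entries over time.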
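The second strategy's message-authentication bullet can be sketched with an HMAC tag on each inter-agent message, so receivers reject anything forged or altered in transit. Note this sketch provides authenticity only; the end-to-end encryption the text also calls for would additionally require an AEAD cipher from a cryptography library. The channel key here is a hypothetical placeholder; key provisioning is not shown.

```python
import hashlib
import hmac
import json

CHANNEL_KEY = b"hypothetical-channel-key"  # per-channel key; distribution not shown

def sign_message(sender: str, body: dict) -> dict:
    """Attach an HMAC-SHA256 tag so receivers can detect forged or altered messages."""
    payload = json.dumps({"sender": sender, "body": body}, sort_keys=True)
    tag = hmac.new(CHANNEL_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_message(msg: dict) -> dict | None:
    """Return the parsed message if the tag checks out, else None (drop it)."""
    expected = hmac.new(CHANNEL_KEY, msg["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(msg["tag"], expected):
        return None  # tampered or poisoned message: discard rather than act on it
    return json.loads(msg["payload"])
```

Dropping unverifiable messages, rather than best-effort parsing them, is what stops a compromised peer from poisoning downstream agents' context.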
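The dynamic-trust bullet can likewise be sketched as a per-peer reputation score updated from interaction outcomes; here a simple exponential moving average with a trust threshold, where the parameter values and peer names are illustrative assumptions.

```python
class TrustTracker:
    """Track a per-peer trust score as an exponential moving average of
    interaction outcomes (1.0 = good interaction, 0.0 = bad)."""

    def __init__(self, alpha: float = 0.3, threshold: float = 0.5, initial: float = 0.7):
        self.alpha = alpha          # weight given to the most recent outcome
        self.threshold = threshold  # below this, the peer is treated as untrusted
        self.initial = initial      # provisional score for previously unseen peers
        self.scores: dict[str, float] = {}

    def record(self, peer: str, outcome: float) -> None:
        """Fold a new outcome into the peer's running score."""
        prev = self.scores.get(peer, self.initial)
        self.scores[peer] = (1 - self.alpha) * prev + self.alpha * outcome

    def is_trusted(self, peer: str) -> bool:
        """Agents consult this before delegating to or accepting work from a peer."""
        return self.scores.get(peer, self.initial) >= self.threshold
```

With these parameters a peer's score decays quickly after bad interactions, letting an agent autonomously stop collaborating with a likely-compromised peer without central intervention.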