
Interconnectivity with malicious external tools

The growing integration and interconnectivity with external tools and plugins increase the risk of exposure to malicious external inputs. This interconnectivity makes it easier for external tools to introduce harmful content [220].

Source: MIT AI Risk Repository (mit1162)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit1162

Domain lineage

2. Privacy & Security


2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Implement stringent input validation and sanitization, coupled with output filtering mechanisms, for all data and code exchanged with external tools or plugins. This proactively defends against adversarial inputs and prevents the introduction of harmful content into the AI system.

2. Enforce granular access control policies based on the principle of least privilege (PoLP) for all interconnected components. External tools must be restricted to the minimum necessary permissions to limit the potential scope and impact of an attack if a component is compromised.

3. Deploy continuous runtime behavioral monitoring and anomaly detection systems at the integration layer. This allows for real-time analysis of API calls, data flows, and model behavior to immediately flag and isolate statistically anomalous activity indicative of a malicious interaction.
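The first two mitigations can be sketched in code. The snippet below is a minimal, illustrative Python sketch, not a reference implementation: the tool names, permission strings, and injection patterns are all hypothetical assumptions, and a production system would need far more comprehensive detection and policy enforcement.

```python
import re

# Hypothetical least-privilege allowlist (mitigation 2): each external tool
# is granted only the permissions it strictly needs; deny by default.
TOOL_PERMISSIONS = {
    "web_search": {"network:read"},
    "calculator": set(),  # no extra privileges at all
}

# Illustrative (not exhaustive) patterns suggesting injected instructions
# or harmful content in data returned by an external tool (mitigation 1).
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"<script\b", re.IGNORECASE),
]

def is_allowed(tool_name: str, permission: str) -> bool:
    """Grant a permission only if it is explicitly allowlisted for the tool."""
    return permission in TOOL_PERMISSIONS.get(tool_name, set())

def sanitize_tool_output(text: str) -> tuple[str, bool]:
    """Flag and neutralize suspicious content from an external tool.

    Returns (sanitized_text, was_flagged) so the caller can also log or
    isolate the interaction (mitigation 3) when was_flagged is True.
    """
    flagged = any(p.search(text) for p in SUSPICIOUS_PATTERNS)
    if flagged:
        for p in SUSPICIOUS_PATTERNS:
            text = p.sub("[REDACTED]", text)
    return text, flagged
```

In this sketch, a compromised tool that returns "ignore previous instructions ..." has the payload redacted before it reaches the model, and a tool not listed in the allowlist receives no permissions at all.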