Leakage
The chatbot reveals sensitive or confidential information.
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
3 - Other
Risk ID
mit1408
Domain lineage
2. Privacy & Security
2.1 > Compromise of privacy by leaking or correctly inferring sensitive information
Mitigation strategy
1. **Implement Independent Data Guardrails and Sanitization:** Deploy security controls independent of the large language model (LLM) to enforce strict input and output sanitization. Filter and redact all personally identifiable information (PII) and proprietary data from user prompts before processing, and similarly validate and strip sensitive data from the model's responses before delivery to the user.
2. **Enforce Least Privilege and System Separation:** Strictly apply the principle of least privilege to the chatbot's execution environment and linked services (OWASP LLM06). Isolate the LLM's system prompt and configuration files from sensitive operational data, credentials (e.g., API keys, connection strings), and internal tool access, preventing both direct and indirect prompt-injection-based information leakage (OWASP LLM07).
3. **Mandate End-to-End Encryption and Robust Authentication:** Use end-to-end encryption (E2EE) for all data in transit between the user interface and the backend processing system. Additionally, require multi-factor authentication (MFA) and granular authorization for all administrative and internal API access points used by the chatbot, so that only authenticated and authorized components can retrieve or interact with sensitive data stores.
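The first mitigation — sanitization guardrails that sit outside the LLM itself — can be sketched as a wrapper that redacts PII from the prompt before the model sees it and from the response before the user sees it. This is a minimal, illustrative sketch: the regex patterns are hypothetical examples and far from exhaustive, and `model_call` is a placeholder for whatever LLM invocation the deployment actually uses.

```python
import re

# Hypothetical PII patterns -- illustrative only, not a complete PII taxonomy.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Redact any matching PII, replacing it with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def guarded_chat(user_prompt: str, model_call) -> str:
    """Apply the guardrail independently on both sides of the model.

    `model_call` is a stand-in for the real LLM invocation; the guardrail
    does not trust the model to police its own inputs or outputs.
    """
    clean_prompt = sanitize(user_prompt)   # input guardrail
    response = model_call(clean_prompt)
    return sanitize(response)              # output guardrail
```

A production guardrail would typically use a dedicated PII-detection service rather than hand-rolled regexes, but the key design point survives: the filtering runs in ordinary application code, independent of the model, so a prompt-injected or misbehaving LLM cannot bypass it.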