Generative AI Outputs
Generative AI tools may inadvertently disclose personal information about a person or their business, or reproduce elements of a person from a photo. In particular, companies concerned that employees' inputs could cause trade secrets to be integrated into the model have explicitly banned employees from using these tools.
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit523
Domain lineage
2. Privacy & Security
2.1 > Compromise of privacy by leaking or correctly inferring sensitive information
Mitigation strategy
1. **Establish and Enforce a Comprehensive Generative AI Use Policy** — Develop and rigorously enforce a clear policy prohibiting employees from inputting sensitive, proprietary, or confidential organizational data (e.g., intellectual property, trade secrets, PII) into any public or unauthorized Generative AI tool. This policy must define acceptable use cases, mandate human oversight of all AI-generated content, and align with relevant data governance and regulatory frameworks (e.g., GDPR, HIPAA).
2. **Implement Context-Aware AI Data Loss Prevention (DLP) Guardrails** — Deploy specialized, context-aware Data Loss Prevention (DLP) systems at the network and endpoint layers. These systems must be configured with technical guardrails to proactively inspect user prompts for sensitive data patterns and block their submission to external AI models, preventing inference-layer leakage and unauthorized data sharing in real time.
3. **Utilize Secure and Isolated Enterprise AI Environments** — For all critical workflows involving sensitive or proprietary data, mandate the use of secure, isolated (private) Generative AI instances hosted within the enterprise's security perimeter. This architectural control prevents the AI vendor from using internal inputs for public model training and ensures complete control over data lineage, access, and retention.
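The prompt-inspection guardrail described in strategy 2 can be sketched as a simple pattern-matching filter. This is a minimal illustration, not a production DLP system: the pattern names, regexes, and function names below are assumptions for demonstration, and real deployments would rely on far more robust detectors (validated checksums, ML-based classifiers, contextual analysis) and enforcement at the network or endpoint layer rather than in application code.

```python
import re

# Illustrative sensitive-data patterns only (hypothetical examples);
# a real DLP system would use vetted, much broader detector sets.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

def guard_submission(prompt: str) -> bool:
    """Allow (True) or block (False) a prompt before it reaches an
    external AI model, based on the inspection results."""
    findings = inspect_prompt(prompt)
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
        return False
    return True
```

In practice such a check would run transparently in a proxy or endpoint agent, so that blocked prompts never leave the enterprise perimeter.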