Disclosure
Revealing or improperly sharing individuals' data. AI creates new types of disclosure risk by inferring additional information beyond what is explicitly captured in the raw data, and it exacerbates existing disclosure risks when personal data is shared to train models.
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit1365
Domain lineage
2. Privacy & Security
2.1 > Compromise of privacy by leaking or correctly inferring sensitive information
Mitigation strategy
1. Rigorously implement privacy-enhancing technologies (PETs) such as differential privacy, homomorphic encryption, and advanced tokenization/data sanitization to prevent the direct inclusion or algorithmic reconstruction of sensitive personal or proprietary information during model training and inference.
2. Enforce the principle of least privilege through Role-Based Access Control (RBAC) and adopt a Zero-Trust security architecture, including restricting model access to external data sources and tightly managing system prompts and inference outputs to limit the scope of information returned.
3. Establish comprehensive AI governance policies that mandate continuous user education on safe LLM interaction, require transparent data usage practices with explicit opt-out mechanisms for training data inclusion, and enforce the exclusive use of enterprise-sanctioned, secure AI platforms for all confidential workflows.
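As a concrete illustration of the first strategy, the following is a minimal sketch of one well-known PET, the Laplace mechanism for differential privacy, applied to a counting query. The function names and the patient dataset are hypothetical; a production system would use a vetted library rather than hand-rolled noise sampling.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one
    # record changes the count by at most 1, so Laplace noise with
    # scale 1/epsilon gives epsilon-differential privacy for the
    # released value.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: release roughly how many patients are over 40
# without the output depending too strongly on any single record.
patients = [{"age": a} for a in (23, 45, 61, 38, 52, 47, 29)]
noisy = dp_count(patients, lambda r: r["age"] > 40, epsilon=0.5)
```

Smaller values of epsilon add more noise and give stronger privacy; the released `noisy` value protects individuals precisely because it does not equal the exact count.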