
Privacy compromise

Privacy Compromise attacks reveal sensitive or private information that was used to train a model, such as personally identifiable information or medical records.

Source: MIT AI Risk Repository (mit1270)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1270

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Prioritize and enforce data minimization principles by utilizing data anonymization, pseudonymization, and synthetic data generation techniques (e.g., differential privacy, data masking, and PII stripping) to exclude sensitive or personally identifiable information from both model training data and live user prompts.

2. Implement stringent access control policies, such as Role-Based Access Control (RBAC) and the principle of least privilege, around the AI model, its data stores, and all model interaction channels (including APIs and internal platforms) to prevent unauthorized input or access to sensitive data.

3. Deploy continuous, real-time monitoring solutions (e.g., AI Artifact Scanning or a Generative AI Firewall) to inspect and filter all model inputs and outputs for signs of sensitive data leakage, adversarial prompt patterns, and unauthorized data exfiltration post-deployment.
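Mitigation steps 1 and 3 can be sketched in a few lines of Python. The snippet below is an illustrative assumption, not part of the repository entry: it uses two hand-rolled regexes (`email`, `ssn`) as stand-in PII detectors and a hypothetical `guard_model_io` wrapper to filter both the prompt and the model's completion. A production deployment would use a vetted PII-detection library rather than these patterns.

```python
import re

# Illustrative patterns only; real systems should rely on a dedicated
# PII-detection library with broader coverage and lower false-negative rates.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def strip_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders (mitigation step 1)."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

def guard_model_io(prompt: str, model_fn) -> str:
    """Filter the inbound prompt and the outbound completion (mitigation step 3)."""
    clean_prompt = strip_pii(prompt)     # prevent PII entering the model
    completion = model_fn(clean_prompt)  # model_fn stands in for any LLM call
    return strip_pii(completion)         # prevent PII leaking back out
```

Filtering on both sides of the model call matters: input stripping limits what sensitive data the system ever processes, while output stripping catches training-data leakage the input filter could never see.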