
Privacy Violations

Embodied AI (EAI) systems interact with huge amounts of data, creating significant privacy concerns. These systems are often trained on vast corpora and, during deployment, process a variety of data modalities spanning visual, auditory, and tactile information [12]. Like text-based virtual AI models, which are known to memorize and expose personally identifiable information [75, 76], commercial robots have been shown to disclose proprietary information through simple prompts [61].

Source: MIT AI Risk Repository, risk ID mit1426

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit1426

Domain lineage: 2. Privacy & Security (186 mapped risks) > 2.1 Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. Establish robust AI governance and 'Privacy by Design' frameworks. Develop and enforce holistic policies that mandate the integration of privacy leaders and principles across the entire AI system lifecycle, from data collection to deployment, to ensure proactive risk management and compliance with evolving data protection regulations.

2. Implement data minimization and privacy-enhancing technologies (PETs). Strictly adhere to the principle of data minimization, collecting only the essential data. For necessary data, utilize technical safeguards such as encryption, anonymization/pseudonymization, differential privacy, and federated learning to mitigate the risk of sensitive information leakage, particularly during model training and inference (see the sketch after this list).

3. Conduct continuous monitoring and independent compliance audits. Systematically assess AI model outputs, training data usage, and adherence to established consent practices via real-time monitoring and regular, independent audits. This ensures prompt detection of potential data breaches or inadvertent exposure of sensitive information post-deployment.
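As a concrete illustration of item 2, the following is a minimal Python sketch of two of the privacy-enhancing technologies it names: pseudonymization of direct identifiers via a keyed hash, and a Laplace-mechanism (differential privacy) release of an aggregate statistic after data minimization. The record fields, the in-code key, and the epsilon value are illustrative assumptions and are not part of this repository entry; a production system would use a vetted DP library and external key management.

```python
"""Sketch of data minimization, pseudonymization, and a Laplace-mechanism release.

All field names, the key, and epsilon are illustrative assumptions.
"""
import hashlib
import hmac
import random

# Illustrative raw records; only an identifier and one metric are assumed to be
# needed downstream, so data minimization drops everything else (e.g. email).
RAW_RECORDS = [
    {"user_id": "alice", "email": "alice@example.com", "session_minutes": 42},
    {"user_id": "bob", "email": "bob@example.com", "session_minutes": 17},
]

# Hypothetical key; in practice it would live in a secrets manager, not in code.
PSEUDONYM_KEY = b"rotate-and-store-this-key-outside-the-dataset"


def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonymization)."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]


def minimize(record: dict) -> dict:
    """Keep only the fields the downstream task actually needs."""
    return {
        "user_ref": pseudonymize(record["user_id"]),
        "session_minutes": record["session_minutes"],
    }


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two exponential draws."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)


def dp_mean(values: list[float], lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean via the Laplace mechanism.

    Each value is clamped to [lower, upper], so one record can shift the mean by
    at most (upper - lower) / n; that bound is the sensitivity used for the noise.
    """
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clamped)
    return sum(clamped) / len(clamped) + laplace_noise(sensitivity / epsilon)


if __name__ == "__main__":
    released = [minimize(r) for r in RAW_RECORDS]
    minutes = [r["session_minutes"] for r in released]
    print(released)
    print("DP mean session minutes:", dp_mean(minutes, lower=0, upper=120, epsilon=1.0))
```

Note that the noise scale is driven by the clamping bounds and epsilon, not by the observed data, which is what lets the privacy guarantee hold regardless of any single record's value; with only a handful of records the added noise will dominate, which is expected behaviour rather than a bug.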