2. Privacy & Security

Loss of privacy

AI offers the temptation to abuse someone's personal data, for instance by building a profile of them to target advertisements more effectively.

Source: MIT AI Risk Repository (mit86)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit86

Domain lineage

2. Privacy & Security

186 mapped risks

2.1 > Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. Proactively integrate **Privacy-by-Design** principles, mandating the use of advanced privacy-preserving technologies such as **differential privacy** and **federated learning** to minimize data granularity and limit the ability to correctly infer sensitive information for profiling and targeted advertising, complemented by strict **data minimization** and **pseudonymization** techniques.

2. Institute rigorous **Consent and Transparency Protocols** that require **explicit opt-in consent** for any personal data profiling used to generate advertising segments, and ensure **clear, user-friendly disclosure** of the specific data collected, the profiling logic employed, and the resulting uses in accordance with regulatory frameworks (e.g., GDPR).

3. Implement a continuous **AI Governance and Monitoring Framework** to deploy and manage layered security controls, including **Role-Based Access Control (RBAC)**, and utilize **continuous behavioral monitoring** of model outputs and data access logs to detect and flag any unauthorized data extraction or anomalous activity indicative of sensitive information leakage or abuse.
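To make the first strategy concrete, below is a minimal sketch of one differential-privacy technique, the Laplace mechanism, applied to an aggregate audience count. The dataset, predicate, and epsilon value are hypothetical illustrations, not part of the repository entry; a production system would use a vetted library rather than hand-rolled noise.

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise as the difference of two
    exponential draws (a standard, edge-case-free construction)."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a noisy count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one
    person changes the count by at most 1), so noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical audience data: each record is (user_id, clicked_ad).
records = [(i, i % 3 == 0) for i in range(1000)]

# Advertisers see only the noisy aggregate, never individual profiles.
noisy = dp_count(records, lambda r: r[1], epsilon=1.0)
```

The point of the sketch is that the advertiser-facing output is an aggregate with calibrated noise, so no individual's inclusion can be confidently inferred from the released number.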