2. Privacy & Security | 2 - Post-deployment

Decision-making on inferred private data

Current general-purpose AI systems (GPAIs), including LLMs and multimodal LLM-based models, are highly capable of inferring correlations in text data. In some cases, they can make highly accurate inferences about users from the contextual input those users provide [134]. Such inferences can "leak" or reveal sensitive information about the user, cause unfair treatment, or enable manipulation of user behavior.

Source: MIT AI Risk Repository, mit1204

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit1204

Domain lineage

2. Privacy & Security

186 mapped risks

2.1 > Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

- Implement a rigorous data minimization and privacy-by-design framework across the AI lifecycle, prioritizing the use of synthetic data and applying state-of-the-art anonymization or pseudonymization techniques (e.g., Differential Privacy) to training datasets to restrict the model's capacity to memorize or accurately infer sensitive personal information.
- Conduct continuous and targeted risk assessments, including Data Protection Impact Assessments (DPIAs) and pre-deployment "red-teaming" for privacy leakage, to empirically measure the model's susceptibility to Inference-Based Privacy (IBP) attacks and to ensure robust cybersecurity protections against data exposure.
- Enforce governance measures that treat algorithmic inferences of sensitive attributes as protected data, applying obligations of deletion, disclosure, and purpose limitation. Additionally, deploy and assess bias mitigation algorithms, such as the disparate impact remover, to mitigate unfair treatment resulting from decision-making based on these inferred attributes.
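The differential privacy mentioned in the first mitigation can be illustrated with a minimal sketch. The `dp_count` helper and its parameters below are hypothetical illustrations, not part of the repository entry: the idea is to add Laplace noise calibrated to a query's sensitivity, so that the presence or absence of any single individual changes the released statistic only slightly.

```python
import random


def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. Exponential(1/scale) draws
    # follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    record changes the true count by at most 1), so Laplace noise
    with scale 1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Example: release only a noised count of users flagged with a
# (hypothetical) sensitive attribute, never the raw count.
users = [{"sensitive": i % 3 == 0} for i in range(300)]
noisy = dp_count(users, lambda u: u["sensitive"], epsilon=0.5)
```

Smaller epsilon means stronger privacy but larger noise. A production system would rely on a vetted implementation (e.g., OpenDP or Google's differential-privacy library) rather than hand-rolled sampling, since floating-point and random-number subtleties can break formal guarantees.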