2. Privacy & Security | 2 - Post-deployment

Privacy infringement

Leaking, generating, or correctly inferring private and personal information about individuals

Source: MIT AI Risk Repository (mit267)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit267

Domain lineage

2. Privacy & Security

186 mapped risks

2.1 > Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. Proactively embed Privacy by Design (PbD) and Data Minimization principles throughout the AI system lifecycle, ensuring sensitive data is anonymized or de-identified, and that only the minimum necessary personal information is collected, processed, and retained.

2. Establish a comprehensive AI Governance and Compliance Framework that includes mandatory, regular Privacy Impact Assessments (PIAs) and risk assessments to identify, evaluate, and mitigate privacy threats, ensuring strict adherence to global data protection regulations.

3. Implement advanced security architectures and technical controls to safeguard all personal data, including secure storage, strict access control, and continuous monitoring mechanisms to detect and prevent unauthorized data leakage or compromise via the AI's output or internal processes.
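As a concrete illustration of the data minimization and redaction controls above, the following minimal Python sketch drops non-essential fields from a record and masks obvious PII patterns before the record is retained or logged. The field names, allow-list, and regex patterns are assumptions chosen for the example, not a complete PII detector or a prescribed implementation.

```python
import re

# Only the fields needed downstream are retained (assumed allow-list).
ALLOWED_FIELDS = {"user_id", "query", "timestamp"}

# Simple illustrative patterns; real deployments would use a vetted PII detector.
CREDIT_CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def minimize(record: dict) -> dict:
    """Drop non-essential fields, then mask card numbers and email addresses."""
    kept = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    for key, value in kept.items():
        if isinstance(value, str):
            value = CREDIT_CARD.sub("[REDACTED-CARD]", value)
            value = EMAIL.sub("[REDACTED-EMAIL]", value)
            kept[key] = value
    return kept

record = {
    "user_id": "u-123",
    "query": "refund to card 4111 1111 1111 1111, reply to a@b.com",
    "timestamp": "2024-01-01T00:00:00Z",
    "billing_address": "10 Main St",  # never needed downstream: dropped entirely
}
print(minimize(record))
```

Dropping the field outright (data minimization) is stronger than masking it, since data that is never retained cannot later leak through the AI's output or logs; masking is the fallback for fields that must be kept.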

ADDITIONAL EVIDENCE

Example: Leaking a person’s payment address and credit card information (Metz, 2023)