2. Privacy & Security

Data Privacy

Impacts due to leakage and unauthorized use, disclosure, or de-anonymization of biometric, health, location, or other personally identifiable information or sensitive data.

Source: MIT AI Risk Repository (mit759)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit759

Domain lineage

2. Privacy & Security

186 mapped risks

2.1 > Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. **Prioritize Layered Technical Controls.** Mandate industry-standard **encryption** (e.g., AES-256) for all sensitive data *at rest* and *in transit*. Enforce the **Principle of Least Privilege** via strict **Role-Based Access Control (RBAC)** and require **Multi-Factor Authentication (MFA)** for all system access, establishing a strong technical perimeter against unauthorized access and disclosure.

2. **Mitigate Re-identification Risk.** Apply robust **de-identification** techniques, such as **pseudonymization** (e.g., tokenization with separate, secured key storage) or **anonymization** (e.g., k-anonymity, aggregation/generalization), before processing data for research, analytics, or disclosure. Couple this with strict adherence to the **data minimization** principle to reduce the volume of sensitive PII collected and retained.

3. **Establish Proactive Governance and Oversight.** Institute a continuous monitoring and auditing framework, using tools such as **Security Information and Event Management (SIEM)** and identity analytics, to detect and flag anomalous user access patterns or unauthorized data transfer attempts (anti-data-exfiltration, ADX) in real time. Complement this technical control with mandatory, recurring **privacy and security awareness training** for all staff to address human error and foster a culture of security mindfulness.
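The de-identification techniques named in strategy 2 can be sketched in a few lines. The example below is a minimal illustration, not a production design: `pseudonymize` derives a deterministic token via a keyed HMAC (the key standing in for the "separate, secured key storage"), `generalize_age` coarsens an exact value into a band, and `is_k_anonymous` checks whether every quasi-identifier combination appears at least k times. All function names and the choice of HMAC-SHA256 are illustrative assumptions.

```python
import hmac
import hashlib
from collections import Counter

def pseudonymize(value: str, key: bytes) -> str:
    """Deterministic keyed token for a direct identifier.
    The key must live in a separate, access-controlled secrets store."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

def generalize_age(age: int, width: int = 10) -> str:
    """Coarsen an exact age into a band, e.g. 37 -> '30-39'."""
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

def is_k_anonymous(records: list[dict], quasi_ids: list[str], k: int = 5) -> bool:
    """True if every combination of quasi-identifier values occurs >= k times,
    so no record is uniquely re-identifiable from those attributes alone."""
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in combos.values())
```

Note that pseudonymization alone is reversible by anyone holding the key, which is why the repository entry pairs it with data minimization and anonymization of quasi-identifiers before disclosure.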
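The anomaly-detection idea behind strategy 3 (SIEM-style flagging of unusual access patterns) can likewise be sketched with a simple statistical threshold. This is a toy assumption-laden example, not how any particular SIEM product works: it flags users whose event count sits more than `z_threshold` standard deviations above the population mean.

```python
import statistics
from collections import Counter

def flag_anomalous_users(access_log: list[str], z_threshold: float = 3.0) -> set[str]:
    """Flag users whose access count is far above the population mean.

    access_log: one user ID per access event.
    Returns user IDs whose count exceeds mean + z_threshold * stdev.
    A real deployment would use richer features (time of day, resource
    sensitivity, transfer volume), not raw counts alone.
    """
    counts = Counter(access_log)
    values = list(counts.values())
    if len(values) < 2:          # not enough users to estimate a baseline
        return set()
    cutoff = statistics.mean(values) + z_threshold * statistics.stdev(values)
    return {user for user, n in counts.items() if n > cutoff}
```

In practice such rules feed an alerting pipeline for review by security staff, which is where the training requirement in strategy 3 comes back in.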