
Risks to privacy

General-purpose AI systems can cause or contribute to violations of user privacy. Violations can occur inadvertently during the training or use of AI systems, for example through unauthorised processing of personal data or the leakage of health records included in training data. But violations can also happen deliberately when malicious actors use general-purpose AI, for example to infer private facts about individuals or to breach security protections.

Source: MIT AI Risk Repository (risk ID mit1031)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

3 - Other

Risk ID

mit1031

Domain lineage

2. Privacy & Security

186 mapped risks

2.1 > Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. Implement data minimization and advanced privacy-enhancing technologies (PETs) across the entire AI lifecycle, mandating the use of differential privacy, k-anonymity, and pseudonymization/data masking for all sensitive training, validation, and inference datasets to preclude unauthorized data extraction or re-identification.

2. Establish a Zero Trust security architecture for the AI data pipeline and model infrastructure, enforcing strict Role-Based Access Control (RBAC), Multi-Factor Authentication (MFA), and robust end-to-end encryption (for data at rest and in transit) to prevent unauthorized access and control the flow of sensitive information.

3. Institute a continuous governance program encompassing regular security audits, real-time monitoring of data access and model outputs for anomalies, and routine adversarial testing (e.g., red teaming) to proactively identify and mitigate vulnerabilities exploitable by malicious actors to infer private facts or compromise data security.
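To make the first strategy concrete, below is a minimal illustrative sketch (not part of the repository entry) of two of the named techniques: pseudonymization of direct identifiers via salted hashing, and a differentially private count using the Laplace mechanism. The function names and parameters (`pseudonymize`, `dp_count`, `epsilon`) are illustrative assumptions, not a prescribed implementation; a production system would use a vetted DP library and a managed secret for the salt.

```python
import hashlib
import math
import random

def pseudonymize(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash (pseudonym).
    The salt must be kept secret, or the mapping can be brute-forced."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:16]

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count: the true count plus Laplace(1/epsilon)
    noise. A counting query has sensitivity 1 (adding or removing one
    record changes the count by at most 1), so scale = 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

For example, releasing `dp_count(patients, lambda r: r["age"] >= 50, epsilon=1.0)` instead of the exact count limits how much any single record can influence the published statistic, which is the property that blocks the re-identification attacks the mitigation targets.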