2. Privacy & Security

3 - Other

Risks from data (Risks of data leakage)

In AI research, development, and applications, issues such as improper data processing, unauthorized access, malicious attacks, and deceptive interactions can lead to data and personal information leaks.

Source: MIT AI Risk Repository (mit690)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit690

Domain lineage

2. Privacy & Security

186 mapped risks

2.1 > Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. Establish a formal AI governance and policy framework, backed by mandatory, ongoing employee training, that explicitly defines acceptable use, classifies sensitive data, and mandates a Zero Trust architecture for all AI systems and data repositories.

2. Deploy Data Loss Prevention (DLP) solutions with real-time prompt and input scanning to automatically detect and block the transmission of sensitive, proprietary, or personally identifiable information (PII) to both sanctioned and unsanctioned AI models.

3. Apply data-protection-by-design techniques, including data minimization, robust anonymization/pseudonymization, and differential privacy, to reduce the quantity and identifiability of sensitive information in AI training and inference data pipelines.
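The prompt-scanning step in mitigation 2 can be sketched as follows. This is a minimal illustration only: the pattern set, function names, and blocking policy are all assumptions for the example, and a production DLP product would rely on vetted detectors and classifiers rather than a handful of regexes.

```python
import re

# Illustrative-only PII patterns; real DLP tooling uses far more robust detection.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of PII categories detected in an outbound prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def guard_prompt(prompt: str) -> str:
    """Block transmission to an AI model if the prompt contains sensitive data."""
    findings = scan_prompt(prompt)
    if findings:
        raise ValueError(f"Blocked: prompt contains {', '.join(findings)}")
    return prompt
```

In practice such a check would sit in a proxy or gateway between users and both sanctioned and unsanctioned model endpoints, so that every outbound prompt passes through it.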
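The differential-privacy technique named in mitigation 3 can be illustrated with the classic Laplace mechanism: a count query answered with noise drawn from Laplace(0, 1/ε), which gives ε-differential privacy because a count has sensitivity 1. The function names and the example query are assumptions made for this sketch, not part of the repository entry.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via inverse-CDF transform of a uniform draw."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    """Answer a count query with epsilon-DP: true count + Laplace(1/epsilon) noise.

    A count query has sensitivity 1 (adding or removing one record changes
    the result by at most 1), so noise scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller ε means stronger privacy but noisier answers; the scale 1/ε makes that trade-off explicit.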