2. Privacy & Security

Privacy Invasion

AI systems typically depend on extensive data for effective training and functioning, which can pose a privacy risk if sensitive data is mishandled or used inappropriately.

Source: MIT AI Risk Repository (mit469)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit469

Domain lineage

2. Privacy & Security

186 mapped risks

2.1 > Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. Implement a rigorous Data Minimization strategy, ensuring that the AI system's design and training lifecycle (Privacy by Design) restrict the collection, processing, and retention of Personally Identifiable Information (PII) strictly to the data necessary and relevant for the defined purpose.

2. Mandate the use of Privacy-Enhancing Technologies (PETs), such as homomorphic encryption and effective data anonymization or pseudonymization techniques, for all sensitive data utilized in the AI model's training and deployment environments.

3. Enforce the Principle of Least Privilege (PoLP) and robust Role-Based Access Controls (RBAC) to restrict system and data access to only essential personnel, complemented by multi-factor authentication (MFA) and continuous access review audits.
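The first two strategies above (data minimization and pseudonymization) can be sketched in a few lines. The helper below is a hypothetical illustration, not part of the repository entry: it drops fields not needed for the defined purpose and replaces remaining PII values with keyed HMAC-SHA256 pseudonyms. The field names, the `allowed` set, and the inline secret are all illustrative assumptions; in practice the key would come from a secrets manager, and pseudonymized data generally still counts as personal data under regimes such as the GDPR.

```python
import hmac
import hashlib

# Assumption: in a real deployment this key is fetched from a secrets
# manager, never hard-coded, and is stored separately from the data.
SECRET_KEY = b"replace-with-a-managed-secret"

# Illustrative field list: which incoming fields contain PII.
PII_FIELDS = {"email", "name", "phone"}

def pseudonymize(value: str, key: bytes = SECRET_KEY) -> str:
    """Return a stable pseudonym: same input always maps to the same
    token, so joins across records still work, but the raw value is
    not exposed in the training set."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

def minimize_record(record: dict) -> dict:
    """Apply data minimization, then pseudonymize what remains."""
    # Data minimization: keep only the fields the defined purpose requires.
    allowed = {"email", "age"}  # illustrative purpose-bound whitelist
    out = {}
    for field, value in record.items():
        if field not in allowed:
            continue  # discard unneeded fields entirely
        if field in PII_FIELDS:
            out[field] = pseudonymize(str(value))
        else:
            out[field] = value
    return out

record = {"email": "alice@example.com", "name": "Alice",
          "age": 34, "phone": "555-0100"}
clean = minimize_record(record)
# 'name' and 'phone' are dropped; 'email' becomes an HMAC token; 'age' passes through.
```

Keyed hashing is preferred over a plain hash here because an unkeyed hash of a low-entropy identifier (an email, a phone number) can be reversed by brute force; the secret key blocks that attack as long as the key itself is protected under the access controls described in strategy 3.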