2. Privacy & Security - Post-deployment

Privacy

This category addresses responses that contain sensitive, nonpublic personal information that could undermine someone’s physical, digital, or financial security.

Source: MIT AI Risk Repository, mit362

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit362

Domain lineage

2. Privacy & Security

186 mapped risks

2.0 > Privacy & Security

Mitigation strategy

1. Implement Robust Output Filtering and Sanitization Mechanisms. Integrate natural language processing filters and data masking techniques into the model's inference pipeline to identify and redact or suppress Sensitive Personal Information (SPI) in generated responses. This technical control serves as a primary, real-time defense against the inadvertent leakage or regurgitation of nonpublic data.

2. Enforce the Principle of Least Privilege and Data Minimization by Design. Apply stringent data governance policies so that the AI system has access only to the minimum volume of personal data required for its function. De-identify all data used for training, fine-tuning, or inference (via pseudonymization or anonymization) where feasible, and protect it with Role-Based Access Control (RBAC) so that employees and systems can access only what is strictly necessary.

3. Establish a Mandatory AI-Specific Governance and Training Program. Develop and enforce a comprehensive Acceptable Use Policy that explicitly prohibits employees and end users from submitting Sensitive Personal Information into the AI interface. Complement this with mandatory, continuous security and privacy awareness training for all stakeholders, emphasizing the high-risk consequences of data exposure and the correct procedures for handling sensitive inputs and outputs related to the AI system.
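As a minimal illustration of the output-filtering control in item 1, the sketch below applies regex-based redaction to a model response before it is returned. The patterns and labels are illustrative assumptions only; a production pipeline would typically combine such rules with NER models or a dedicated PII-detection library rather than relying on regexes alone.

```python
import re

# Illustrative SPI patterns -- a real deployment would use a much
# broader, locale-aware set (and ML-based detection) than these.
SPI_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_spi(text: str) -> str:
    """Mask any substring matching a known SPI pattern in model output."""
    for label, pattern in SPI_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

# Example: scrub a model response before it reaches the user.
safe = redact_spi("Contact jane@example.com or 555-123-4567 for details.")
```

Running the filter as the last step of the inference pipeline keeps the control in one place, so every response is sanitized regardless of which upstream component produced the leak.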