2. Privacy & Security

Exclusion

The failure to provide end-users with notice and control over how their data is being used; AI exacerbates exclusion risks by training on rich personal data without consent.

Source: MIT AI Risk Repository (mit1363)

ENTITY: 2 - AI

INTENT: 3 - Other

TIMING: 3 - Other

Risk ID: mit1363

Domain lineage: 2. Privacy & Security (186 mapped risks) > 2.1 Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. Establish Explicit, Granular, and Informed Consent Protocols. Implement mechanisms to secure clear, affirmative, and specific consent from data subjects for the use of their personal data in AI model training. Pair this with rigorous transparency: give end-users easily understandable information about what data is collected, the precise purpose of processing, and the unconditional right to withdraw consent, directly addressing the failure to provide notice and control.

2. Apply Data Minimization and Privacy-Enhancing Technologies (PETs). Adhere strictly to the principle of data minimization, collecting and processing only the minimum personal data necessary for a defined task. Apply technical safeguards, including robust encryption, anonymization, and pseudonymization, to all training and operational datasets to reduce the risk of sensitive information being compromised, inferred, or inadvertently leaked (see the sketch after this list).

3. Implement a Comprehensive AI Governance Framework with Auditing. Develop and enforce an end-to-end AI governance framework that embeds Privacy by Design principles throughout the entire AI lifecycle. Conduct regular Privacy Impact Assessments (PIAs) on all AI projects, and institute continuous monitoring and auditing of model inputs and outputs to detect, flag, and prevent sensitive-data leaks and to maintain compliance with data protection regulations.
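
The following is a minimal Python sketch of how strategies 1 and 2 might combine in a training-data ingestion step. The consent registry, field names, and key handling shown are assumed placeholders for illustration, not a prescribed implementation.

```python
import hashlib
import hmac
from typing import Optional

# Hypothetical allow-list: only the attributes the training task actually needs
# (data minimization). Field names are assumptions for this sketch.
ALLOWED_FIELDS = {"age_bucket", "region", "interaction_text"}

# Key for keyed pseudonymization; in practice this would come from a secrets
# manager, never from source code.
PSEUDONYM_KEY = b"replace-with-managed-secret"


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


def minimize_record(record: dict, consent: dict) -> Optional[dict]:
    """Return a training-safe view of `record`, or None if consent is absent.

    - Drops the record entirely when the data subject has not opted in (strategy 1).
    - Keeps only allow-listed fields and replaces the user ID with a
      pseudonymous token (strategy 2).
    """
    if not consent.get(record["user_id"], False):
        return None  # no affirmative consent -> exclude from training data
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    minimized["subject_token"] = pseudonymize(record["user_id"])
    return minimized


if __name__ == "__main__":
    # Hypothetical consent registry keyed by user ID.
    consent_registry = {"user-42": True, "user-43": False}
    raw = {
        "user_id": "user-42",
        "email": "a@example.com",  # direct identifier, dropped by minimization
        "age_bucket": "30-39",
        "region": "EU",
        "interaction_text": "hello support",
    }
    print(minimize_record(raw, consent_registry))                       # minimized record
    print(minimize_record({**raw, "user_id": "user-43"}, consent_registry))  # None (no consent)
```

In this sketch, consent is checked before any processing, the email address never reaches the training set, and the keyed hash lets records from the same subject be linked (for example, to honor a later withdrawal of consent) without storing the original identifier.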