2. Privacy & Security

Secondary use

The use of personal data collected for one purpose for a different purpose without end-user consent. AI exacerbates secondary-use risks by enabling new AI capabilities built on collected personal data and by (re)creating models from public datasets.

Source: MIT AI Risk Repository (mit1366)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

3 - Other

Risk ID

mit1366

Domain lineage

2. Privacy & Security

186 mapped risks

2.1 > Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. Prioritize Legal and Compatibility Assessment. Formally conduct a compatibility assessment between the initial data-collection purpose and the proposed secondary use, adhering to regulatory principles such as the GDPR's purpose limitation. If the secondary use is incompatible, a new legal basis must be established, such as obtaining explicit, informed consent from the data subjects.

2. Implement Data Minimization and De-identification. Apply robust data-minimization techniques to ensure only the strictly necessary and proportionate amount of personal data is used for the secondary purpose. Utilize recognized de-identification or anonymization methods (e.g., HIPAA's Expert Determination or Safe Harbor) to mitigate the risk of re-identification or correct inference of sensitive information.

3. Establish Strict AI Data Governance and Access Controls. Deploy fine-grained, role-based access controls (RBAC) to limit which personnel and AI models can access the data for the secondary purpose. Enforce clear data-usage restrictions and audit trails for all AI-driven activities, ensuring that training on shared data does not unintentionally expose confidential or identifiable information.
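The minimization and de-identification step above can be sketched in code. The following is a minimal illustration, not a compliant de-identification pipeline: the field names, the identifier list, and the `deidentify` helper are all hypothetical, and a salted truncated hash is only a pseudonym, not anonymization in the regulatory sense.

```python
import hashlib

# Hypothetical direct identifiers to drop before secondary use.
# A real Safe Harbor implementation covers 18 identifier categories.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "ssn"}

def deidentify(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the user ID with a salted,
    truncated SHA-256 pseudonym (illustrative helper, not a standard API)."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "user_id" in cleaned:
        digest = hashlib.sha256((salt + str(cleaned["user_id"])).encode()).hexdigest()
        cleaned["user_id"] = digest[:16]  # pseudonym; keep the salt out of the dataset
    return cleaned

record = {"user_id": 42, "name": "Ada", "email": "ada@example.com", "zip": "02139"}
print(deidentify(record, salt="per-project-secret"))
```

Note that quasi-identifiers such as ZIP code survive this sketch; whether they must also be generalized or suppressed depends on the re-identification risk assessment in step 2.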