Privacy Harms
These harms arise from violations of an individual’s or group’s moral or legal right to privacy. They may be exacerbated by assistants that influence users to disclose their own personal information, or private information pertaining to others. Resultant harms include identity theft, or stigmatisation and discrimination based on individual or group characteristics, with a particularly detrimental impact on marginalised communities. Furthermore, state-owned AI assistants could, in principle, employ manipulation or deception to extract private information for surveillance purposes.
ENTITY
2 - AI
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit393
Domain lineage
2. Privacy & Security
2.1 > Compromise of privacy by leaking or correctly inferring sensitive information
Mitigation strategy
1. Embed Privacy by Design (PbD) and Conduct Data Protection Impact Assessments (DPIAs): Proactively integrate privacy safeguards throughout the entire AI system lifecycle, including the initial design and development phases. Regularly conduct DPIAs to identify and apply appropriate safeguards to mitigate perceived risks before system deployment.
2. Implement Data Minimization, Anonymization, and Encryption: Adhere strictly to the principle of data minimization by collecting and processing only the essential amount of personal data. Utilize advanced anonymization, pseudonymization, and encryption techniques to protect sensitive information during training, storage, and transmission, reducing the risk of inference or leakage.
3. Ensure Transparent Data Collection and Obtain Valid Consent: Provide individuals with clear and comprehensive information regarding how their data is collected, stored, and specifically utilized by AI systems. Establish a valid legal basis for all personal data processing, ensuring individuals' consent is fully informed and freely given, particularly concerning secondary or repurposed use of data.
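As one illustrative sketch of strategy 2 above, data minimization and pseudonymization can be combined in a preprocessing step: drop fields not needed for the stated purpose, then replace any remaining direct identifier with a keyed hash. All names, fields, and the hard-coded key here are hypothetical; in practice the key would live in a managed secret store, and the choice of fields would follow a DPIA.

```python
import hmac
import hashlib

# Hypothetical secret; in a real system this would come from a key
# management service, never be hard-coded, and be rotatable.
SECRET_KEY = b"replace-with-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    HMAC-SHA256 is used rather than a plain hash so the mapping cannot
    be reversed by brute-forcing common values without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def minimize(record: dict, allowed_fields: set) -> dict:
    """Keep only the fields needed for the stated purpose (data minimization)."""
    return {k: v for k, v in record.items() if k in allowed_fields}

# Example record with more detail than the downstream task requires.
record = {
    "email": "alice@example.com",
    "age_band": "30-39",
    "postcode": "SW1A 1AA",
    "purchase_total": 42.50,
}

# Minimize first, then pseudonymize the remaining direct identifier.
slim = minimize(record, {"email", "age_band", "purchase_total"})
slim["email"] = pseudonymize(slim["email"])
```

Note that pseudonymized data is still personal data under most regimes (the key holder can re-link it), so the keyed-hash step reduces, but does not eliminate, the need for the access controls and encryption the strategy also calls for.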