Privacy violations
Privacy violations occur when algorithmic systems diminish privacy, for example by enabling the undesirable flow of private information [180], instilling the feeling of being watched or surveilled [181], or collecting data without explicit and informed consent... privacy violations may also arise from algorithmic systems making predictive inferences beyond what users openly disclose [222], or when data collected and algorithmic inferences made about people in one context are applied to another, through big data flows, without the person’s knowledge or consent.
ENTITY
2 - AI
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit151
Domain lineage
2. Privacy & Security
2.1 > Compromise of privacy by leaking or correctly inferring sensitive information
Mitigation strategy
1. Implement Data Minimization and Privacy-by-Design Principles
Proactively enforce the principle of data minimization, ensuring that only the minimum volume of personal data strictly necessary and proportionate for the specified, explicit, and legitimate purpose is collected, processed, and retained. This foundational requirement must be embedded by design throughout the entire AI system lifecycle, thereby constraining the initial dataset's informational capacity for unintended or non-consensual sensitive attribute inference (Source 11, 13).

2. Employ Advanced Data Protection and Inference-Time Privacy Techniques
Utilize technical controls such as robust anonymization, pseudonymization, and strong encryption for personal data both in storage and in transmission. Furthermore, deploy advanced privacy-enhancing technologies such as Differential Privacy (DP) during model training and/or implement inference-time privacy notions (e.g., Robust Privacy) to ensure model outputs do not serve as a side channel for accurately reconstructing or inferring sensitive attributes about individuals or their related proxies (Source 9, 15, 19).

3. Establish a Robust Governance and Compliance Framework for Algorithmic Inferences
Institute a formal governance structure that mandates ongoing security audits, vulnerability assessments, and regular staff training on data protection obligations. Crucially, this framework must explicitly govern the production and application of predictive models, requiring a documented legal basis and a transparent declaration of purpose for all algorithmic inferences, especially those concerning sensitive or newly inferred personal information (Source 16, 17, 18).
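To make the Differential Privacy technique named in strategy 2 concrete, here is a minimal, hypothetical sketch of the Laplace mechanism applied to a counting query (one common DP primitive; real deployments would use a vetted DP library rather than hand-rolled sampling). The function names and parameter choices below are illustrative assumptions, not part of the source.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has L1 sensitivity 1 (adding or removing one
    person changes the count by at most 1), so the Laplace noise
    scale is sensitivity / epsilon. Smaller epsilon -> more noise
    -> stronger privacy but a less accurate released statistic.
    """
    sensitivity = 1.0
    return true_count + laplace_noise(sensitivity / epsilon, rng)

# Example: release "number of users with attribute X" privately.
rng = random.Random(42)
noisy = dp_count(true_count=100, epsilon=1.0, rng=rng)
```

The released value is unbiased (the noise has mean zero), so aggregate analytics remain usable while any single individual's presence in the data is masked; this is the sense in which DP prevents model or query outputs from acting as a side channel for sensitive-attribute inference.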
ADDITIONAL EVIDENCE
[Shopping] analytics had correctly inferred what he had not known, that his daughter was pregnant.