Privacy and regulation violations
Some of the broken systems discussed above are also highly invasive of people’s privacy, recording, for instance, the length of someone’s last romantic relationship [51]. More recently, ChatGPT was banned in Italy over privacy concerns and a potential violation of the European Union’s (EU) General Data Protection Regulation (GDPR) [52]. The Italian data-protection authority said that “the app had experienced a data breach involving user conversations and payment information.” It also claimed that there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform,” among other concerns related to the age of the users [52]. Privacy regulators in France, Ireland, and Germany could follow in Italy’s footsteps [53]. Relatedly, it has recently become public that Samsung employees inadvertently leaked trade secrets by using ChatGPT to help prepare notes for a presentation and to check and optimize source code [54, 55]. Another example of testing ethical and regulatory limits can be found in the actions of the facial recognition company Clearview AI, which “scraped the public web—social media, employment sites, YouTube, Venmo—to create a database with three billion images of people, along with links to the webpages from which the photos had come” [56]. Trials of this unregulated database have been offered to individual law enforcement officers, who often use it without their department’s approval [57]. In Sweden, such illegal use by the police force led to a fine of €250,000 from the country’s data watchdog [57].
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit60
Domain lineage
2. Privacy & Security
2.1 > Compromise of privacy by leaking or correctly inferring sensitive information
Mitigation strategy
1. Implement a comprehensive Data Privacy and AI Governance Framework that embeds Privacy by Design and requires a valid legal basis (e.g., explicit consent, legitimate interest) for the collection, storage, and processing of personal data, including data used for AI model training. This includes performing Data Protection Impact Assessments (DPIAs) for all high-risk AI systems to proactively identify and mitigate privacy risks.
2. Establish and strictly enforce data minimization principles and robust access controls (e.g., Role-Based Access Control, Multi-Factor Authentication) to ensure AI systems only process the minimum necessary personal data and that only authorized personnel can access sensitive information. Deploy Data Leak Prevention (DLP) tools to monitor and block unauthorized exfiltration of proprietary or personal data, whether inadvertent (employee misuse of LLMs) or malicious.
3. Conduct mandatory and continuous employee security and privacy awareness training covering internal data handling policies and the secure, compliant use of generative AI tools (e.g., LLMs). Simultaneously, implement continuous compliance monitoring and regular privacy risk assessments to ensure alignment with evolving national and international data protection regulations (e.g., GDPR, CCPA).
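To make the DLP point concrete, the sketch below shows a minimal pre-submission filter that redacts sensitive strings before a prompt is sent to an external LLM. The pattern set, labels, and the `redact` function are illustrative assumptions, not part of any real DLP product; production tools rely on far richer detectors (named-entity recognition, checksum validation for card numbers, fingerprinting of proprietary code), so treat this only as a sketch of the interception step.

```python
import re

# Illustrative detectors only (assumption, not a real DLP rule set).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9_]{16,}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches of each pattern with a [REDACTED:<label>] tag.

    Returns the redacted text plus the labels that fired, so a caller
    can log the event or block the outbound request entirely.
    """
    hits = []
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"[REDACTED:{label}]", text)
        if n:
            hits.append(label)
    return text, hits

# Hypothetical employee prompt containing personal and proprietary data.
prompt = ("Please review this code. Contact me at jane.doe@example.com, "
          "using key sk_live_abcdef1234567890.")
clean, flagged = redact(prompt)
```

In a deployment, such a filter would sit in a proxy between employees and the LLM API, with any non-empty `flagged` list triggering a block or an audit-log entry rather than silent redaction.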