2. Privacy & Security

Cyberspace risks (Risks of information leakage due to improper usage)

Staff of government agencies and enterprises, if failing to use the AI service in a regulated and proper manner, may input internal data and industrial information into the AI model, leading to the leakage of work secrets, business secrets, and other sensitive business data.

Source: MIT AI Risk Repository (mit696)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit696

Domain lineage

2. Privacy & Security

186 mapped risks

2.1 > Compromise of privacy by leaking or correctly inferring sensitive information

Mitigation strategy

1. Establish a comprehensive AI Acceptable Use Policy (AI AUP)

Formally define and implement an organizational AI Acceptable Use Policy that explicitly delineates what constitutes confidential or restricted information and prohibits its submission to unapproved or public AI services. The policy should mandate the exclusive use of enterprise-grade AI solutions whose contracts guarantee that customer input data will not be retained or used for model training, establishing a clear legal and operational boundary for data handling.

2. Deploy real-time Data Loss Prevention (DLP) with prompt protection

Integrate DLP solutions equipped with AI prompt protection into the organization's cyber-infrastructure. These technical controls should perform real-time, context-aware analysis of all user inputs, both text and file uploads, to detect and automatically block, redact, or sanitize sensitive data (e.g., PII, proprietary code, financial records) before it is transmitted to the AI platform, providing a critical safeguard against leakage caused by human error.

3. Conduct mandatory and continuous security training

Implement a mandatory, role-specific AI security training program for all employees. The program should go beyond passive compliance lectures to include interactive workshops and real-world simulations covering the mechanics of data leakage, the secure use of approved AI tools, techniques for de-identifying confidential information before analysis, and adherence to the established AI AUP, fostering a proactive culture of security mindfulness.
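The prompt-protection step in strategy 2 can be sketched in a few lines. This is a minimal illustration, not a production DLP engine: the pattern names, the `sanitize_prompt` function, and the regexes are all hypothetical examples, and a real deployment would rely on context-aware classifiers rather than bare regular expressions.

```python
import re

# Illustrative patterns only; a production DLP engine would use
# context-aware classifiers and a far richer rule set.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive data from a prompt before it leaves the org.

    Returns the sanitized prompt plus the categories that were
    detected, which can be logged for the security team.
    """
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt, findings

clean, found = sanitize_prompt(
    "Summarize this: contact alice@corp.example, SSN 123-45-6789."
)
```

In practice such a filter sits at the network or proxy layer, so that it covers every AI service an employee might reach, not only the approved one.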