Data Security Risk
Just as individuals and organizations of every kind have explored possible use cases for generative AI products, so too have malicious actors. This could take the form of facilitating or scaling up existing threat methods, for example drafting actual malware code,[87] business email compromise attempts,[88] and phishing attempts.[89] It could also take the form of new threat methods, for example extracting information fed into the AI's training dataset[90] or poisoning the training dataset with strategically corrupted data.[91] We should also expect new attack vectors, not yet conceived of, that generative AI makes possible or more broadly accessible.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
3 - Other
Risk ID
mit524
Domain lineage
4. Malicious Actors & Misuse
4.3 > Fraud, scams, and targeted manipulation
Mitigation strategy
1. Prioritize Model Integrity through Data Governance: Institute a comprehensive data sanitization and validation pipeline for all pre-training, fine-tuning, and inference datasets, employing statistical outlier and anomaly detection algorithms to prevent and mitigate data and model poisoning attacks.
2. Establish Granular Access Control and Least Privilege: Enforce the Principle of Least Privilege (PoLP) across all computational agents and human users, restricting access rights to the minimal level necessary for function execution and thereby reducing the attack surface for unauthorized data mining and internal misuse.
3. Implement Advanced Adversarial Defense Mechanisms: Deploy layered input and output validation filters, including context-aware validation models, to detect and neutralize complex adversarial exploits such as indirect prompt injection, safeguarding against the generation of malware or harmful content.
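The statistical outlier detection in mitigation step 1 can be sketched as follows. This is a minimal, hypothetical illustration, not a vetted poisoning defense: it scores each candidate training record by a toy feature (character length) and drops records whose median-absolute-deviation-based modified z-score exceeds a threshold. The `sanitize` and `mad_keep_mask` helpers, the length feature, and the 3.5 cutoff are all illustrative assumptions; a real pipeline would score semantic or embedding-space features instead.

```python
from statistics import median

def mad_keep_mask(scores, threshold=3.5):
    """Return a keep/drop mask using the modified z-score
    (median absolute deviation), which stays robust even when
    extreme outliers would inflate an ordinary standard deviation."""
    med = median(scores)
    mad = median(abs(s - med) for s in scores)
    if mad == 0:  # all scores identical: keep everything
        return [True] * len(scores)
    return [0.6745 * abs(s - med) / mad <= threshold for s in scores]

def sanitize(records):
    """Drop candidate records whose toy feature (length) is anomalous."""
    scores = [len(r) for r in records]
    mask = mad_keep_mask(scores)
    return [r for r, keep in zip(records, mask) if keep]

# Illustrative corpus: the last record mimics an oversized injected payload.
corpus = [
    "short note",
    "a typical sentence here",
    "another normal line",
    "x" * 10_000,
]
clean = sanitize(corpus)  # the anomalous record is filtered out
```

A MAD-based score is used rather than a plain z-score because a single extreme poisoned record inflates the sample standard deviation enough to mask itself; the median-based statistic does not have that weakness.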