4. Malicious Actors & Misuse

Malicious use and abuse (cybercrime)

The advanced capabilities and widespread availability of generative AI models allow malicious actors to conduct harmful activities efficiently and at scale while reducing their operational costs. Cybercriminals can “jailbreak” AI tools to generate sensitive and harmful content. They can also exploit generative AI models to create persuasive content tailored to a targeted individual.

Source: MIT AI Risk Repository (mit729)

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit729

Domain lineage: 4. Malicious Actors & Misuse > 4.3 Fraud, scams, and targeted manipulation (223 mapped risks)

Mitigation strategy

1. Implement a Zero Trust architecture across the GenAI environment, requiring continuous authentication, least-privilege access to models and training data, and real-time monitoring of all user and system interactions to prevent unauthorized access and detect anomalous behavior.

2. Conduct comprehensive, scenario-driven employee training focused specifically on advanced AI-enabled threats, such as highly personalized synthetic phishing, deepfake impersonations, and prompt injection attempts, to build resilience against social engineering.

3. Deploy advanced AI-powered threat detection systems that utilize machine learning and behavioral analytics to identify subtle, evolving anomalous patterns in network traffic, system logs, and communications that are indicative of sophisticated AI-generated threats or malware.
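To make the behavioral-analytics idea in item 3 concrete, here is a minimal, illustrative sketch of one common building block: flagging users whose activity rate deviates sharply from a population baseline using a z-score test. This is a simplification for illustration only; the function name, inputs, and the 3-sigma threshold are assumptions, and production systems would use far richer features and models than a single rate statistic.

```python
import statistics

def flag_anomalous_users(baseline_rates, current_rates, threshold=3.0):
    """Flag users whose current request rate deviates from the
    population baseline by more than `threshold` standard deviations.

    baseline_rates: list of historical per-user request rates (the norm).
    current_rates: dict mapping user -> observed request rate this window.
    Returns a dict of flagged users and their z-scores.
    (Illustrative sketch only; names and threshold are assumptions.)
    """
    mean = statistics.mean(baseline_rates)
    stdev = statistics.stdev(baseline_rates)
    flagged = {}
    for user, rate in current_rates.items():
        # Guard against a zero-variance baseline.
        z = (rate - mean) / stdev if stdev else 0.0
        if abs(z) > threshold:
            flagged[user] = round(z, 2)
    return flagged
```

For example, a user issuing hundreds of requests per window against a baseline of roughly ten would be flagged, while users near the baseline would not. Real deployments would layer this kind of statistical test with learned models over network traffic, system logs, and communication metadata, as the mitigation text describes.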