4. Malicious Actors & Misuse - Post-deployment

Harm to individuals through fake content

General-purpose AI systems can be used to increase the scale and sophistication of scams and fraud, for example through general-purpose AI-enhanced ‘phishing’ attacks. General-purpose AI can be used to generate fake compromising content featuring individuals without their consent, posing threats to individual privacy and reputation.

Source: MIT AI Risk Repository (mit769)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit769

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Implement comprehensive technical safeguards within general-purpose AI systems, including Content Provenance and Authentication (CPA) standards (e.g., C2PA) and robust semantic guardrails on both input prompts and generated output, to actively prevent the creation of non-consensual fake content, deepfakes, and other illegal materials.

2. Mandate and deploy Multi-Factor Authentication (MFA) across all user accounts and enterprise systems to establish a critical defense layer, thereby neutralizing the effectiveness of AI-enhanced phishing attacks designed to harvest login credentials for account takeover and fraud.

3. Establish and promote rigorous independent verification protocols for any unexpected or urgent requests for personal data, funds, or sensitive actions, particularly those delivered via voice, video, or highly personalized text, by requiring users to contact the alleged sender through a pre-verified, trusted communication channel (e.g., a known, official phone number or in-person code word).