4. Malicious Actors & Misuse · 2 - Post-deployment

Privacy and consent

Even when a victim of targeted, AI-generated harms successfully identifies a deepfake creator with malicious intent, they may still struggle to obtain redress for many harms, because the generated image or video does not depict the victim directly; it is instead a composite that draws on multiple sources to create a believable yet fictional scene. At their core, these AI-generated images and videos circumvent traditional notions of privacy and consent: because they are built from public images and videos, such as those posted on social media, they often involve no private information at all.

Source: MIT AI Risk Repository, risk ID mit518

ENTITY: 1 - Human
INTENT: 1 - Intentional
TIMING: 2 - Post-deployment
Risk ID: mit518

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.3 Fraud, scams, and targeted manipulation

Mitigation strategy

1. Advance the development and global adoption of legally binding technical standards (e.g., C2PA) to mandate cryptographic provenance, traceability, and watermarking for all Generative AI systems and their outputs. This is critical for establishing content authenticity, assigning accountability, and providing a verifiable technical basis for legal redress (a minimal provenance sketch follows this list).
2. Prioritize research into, and deployment of, generalizable deepfake detection architectures (e.g., hybrid, ensemble, and forensic-based AI models) capable of accurately identifying composite, novel, and adversarial synthetic artifacts designed to evade current detection systems (see the ensemble sketch below).
3. Establish robust organizational protocols, including employee media-literacy training focused on identifying deepfake red flags, and mandatory multi-factor authentication (MFA) methods that are resilient to biometric impersonation, such as hardware security tokens or behavioral biometrics, to prevent deepfake-enabled fraud and impersonation (see the authentication-policy sketch below).
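
For item 1, the C2PA standard itself defines signed manifests with assertions, claim generators, and certificate chains. As a rough illustration of the underlying idea only, the sketch below signs and verifies generated media with an Ed25519 key using Python's `cryptography` package; the function names and the digest-then-sign scheme are simplified assumptions, not the C2PA format.

```python
# Illustrative sketch only: a minimal analogue of cryptographic provenance
# for generated media, using an Ed25519 signature over the file's bytes.
# Real C2PA manifests are far richer; sign_media/verify_media are hypothetical names.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_media(private_key: ed25519.Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of generated media at creation time."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)


def verify_media(public_key: ed25519.Ed25519PublicKey,
                 media_bytes: bytes, signature: bytes) -> bool:
    """Check that the media still matches the provenance signature it shipped with."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    generator_key = ed25519.Ed25519PrivateKey.generate()
    media = b"...generated image bytes..."
    sig = sign_media(generator_key, media)
    print(verify_media(generator_key.public_key(), media, sig))         # True
    print(verify_media(generator_key.public_key(), media + b"x", sig))  # False: tampered
```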
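
For item 2, one common pattern for more generalizable detection is score-level ensembling across heterogeneous detectors, so that no single model's blind spot decides the outcome. The sketch below assumes each detector maps media bytes to a probability that the input is synthetic; the detectors, weights, and decision threshold are hypothetical placeholders, not a production pipeline.

```python
# Minimal sketch of score-level ensembling for deepfake detection.
# Each detector is assumed to return P(synthetic) in [0, 1].
from typing import Callable, Sequence

Detector = Callable[[bytes], float]


def ensemble_score(media: bytes,
                   detectors: Sequence[Detector],
                   weights: Sequence[float]) -> float:
    """Weighted average of per-detector probabilities."""
    total = sum(weights)
    return sum(w * d(media) for d, w in zip(detectors, weights)) / total


def is_likely_synthetic(media: bytes,
                        detectors: Sequence[Detector],
                        weights: Sequence[float],
                        threshold: float = 0.7) -> bool:
    return ensemble_score(media, detectors, weights) >= threshold


if __name__ == "__main__":
    # Stand-ins for, e.g., a frequency-artifact model, a face-forensics CNN,
    # and a temporal-consistency model (all hypothetical).
    detectors = [lambda m: 0.82, lambda m: 0.64, lambda m: 0.91]
    weights = [0.4, 0.3, 0.3]
    print(is_likely_synthetic(b"frame bytes", detectors, weights))
```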
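
For item 3, the organizational control can be expressed as a step-up authentication policy: a high-risk action, such as a wire transfer requested over a video call, is never approved on voice or face matches alone. The factor names and approval rule below are illustrative assumptions, not any specific vendor's API.

```python
# Hedged sketch of a step-up authentication policy for deepfake-resistant MFA.
# High-risk actions require at least one factor a deepfake cannot reproduce.
from enum import Enum, auto


class Factor(Enum):
    VOICE_MATCH = auto()          # spoofable by audio deepfakes
    FACE_MATCH = auto()           # spoofable by video deepfakes
    HARDWARE_KEY = auto()         # e.g., FIDO2 security key
    TOTP_ENROLLED_DEVICE = auto()
    BEHAVIORAL_BIOMETRIC = auto()


IMPERSONATION_RESISTANT = {
    Factor.HARDWARE_KEY,
    Factor.TOTP_ENROLLED_DEVICE,
    Factor.BEHAVIORAL_BIOMETRIC,
}


def approve_high_risk_action(presented: set) -> bool:
    """Require at least two factors overall, one of them impersonation-resistant."""
    return len(presented) >= 2 and bool(presented & IMPERSONATION_RESISTANT)


if __name__ == "__main__":
    print(approve_high_risk_action({Factor.VOICE_MATCH, Factor.FACE_MATCH}))   # False
    print(approve_high_risk_action({Factor.FACE_MATCH, Factor.HARDWARE_KEY}))  # True
```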