4. Malicious Actors & Misuse

Nonconsensual use

Generative AI models may be intentionally used to imitate people without their consent through deepfakes in video, image, audio, or other modalities.

Source: MIT AI Risk Repository (mit1302)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1302

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Mandate and enforce upstream technical safeguards at the model development and deployment level to actively prevent the creation of nonconsensual intimate imagery (NCII) and child sexual abuse material (CSAM), ensuring these guardrails are robust against circumvention techniques such as jailbreaking.

2. Develop and integrate sophisticated deepfake detection and media authentication mechanisms, including digital watermarking, metadata analysis, and AI-based forensic tools (e.g., CNNs, autoencoders), across consumer-facing platforms to reliably identify and label synthetic content, thereby preserving media authenticity and public trust.

3. Establish comprehensive legal and regulatory frameworks that impose liability across the AI supply chain (model developers, providers, and users) for the creation and dissemination of nonconsensual deepfakes, alongside granting robust legal recourse for victims to secure content removal and seek civil or criminal damages.
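One of the authentication mechanisms named above, digital watermarking, can be illustrated with a minimal least-significant-bit (LSB) sketch. This is a toy example under simplifying assumptions: real provenance watermarks (e.g., those deployed by model providers) use robust, imperceptible frequency-domain or learned schemes that survive compression and editing, which plain LSB embedding does not. The function names and the list-of-integers "pixel" representation here are illustrative, not from any particular library.

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel value.

    Clearing the LSB (p & ~1) and OR-ing in the watermark bit changes each
    pixel intensity by at most 1, so the edit is visually imperceptible.
    """
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n):
    """Recover the first n watermark bits by reading pixel LSBs."""
    return [p & 1 for p in pixels[:n]]

# Toy demonstration: embed a 6-bit mark into 6 "pixel" intensities (0-255).
cover = [200, 13, 77, 54, 91, 120]
mark  = [1, 0, 1, 1, 0, 0]

stego = embed_watermark(cover, mark)
recovered = extract_watermark(stego, len(mark))
assert recovered == mark  # round trip succeeds; no pixel moved by more than 1
```

The same round-trip structure (embed at generation time, extract at verification time) underlies production media-authentication pipelines; what differs is the embedding domain and its robustness to transformation.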