4. Malicious Actors & Misuse

Violation of personal integrity

Non-consensual use of one’s personal identity or likeness for unauthorised purposes (e.g. commercial purposes)

Source: MIT AI Risk Repository (mit275)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit275

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Mandate and Enforce Content Provenance and Authentication. Require all generative AI services to implement robust technical standards, such as those prescribed by the Coalition for Content Provenance and Authenticity (C2PA), to embed verifiable metadata (content credentials) and imperceptible watermarks within all generated media (image, video, and audio). This ensures the authenticity and origin of digital content can be programmatically verified to combat the non-consensual dissemination and malicious use of deepfakes.

2. Deploy Advanced Multi-Modal Deepfake Detection and Liveness Systems. Organizations must integrate and continuously refine real-time, AI-powered multi-modal detection tools, including liveness detection, forensic voice analysis, and visual artifact analysis, at all critical interaction points (e.g., account access, financial authorization, and secure communications) to reliably distinguish genuine human likeness or voice from sophisticated synthetic media. This should be coupled with Multi-Factor Authentication (MFA) that avoids voice biometrics as the sole or primary factor.

3. Establish Comprehensive Ethical Literacy and Incident Response Protocols. Implement mandatory, recurring ethical literacy and digital skepticism training programs for all personnel and stakeholders, focusing on the specific visual and auditory forensic markers of emerging deepfake technologies. Concurrently, establish clear, non-punitive internal reporting mechanisms, peer-coaching structures, and pre-scripted response protocols for employees confronting ethically ambiguous or suspicious requests, particularly those involving identity impersonation.
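To make the provenance idea in strategy 1 concrete, the sketch below shows a deliberately simplified "content credential": a claim binding a hash of the media bytes to its declared origin, protected by an HMAC signature. This is not the C2PA standard itself (C2PA uses signed manifests with X.509 certificate chains, not a shared secret); the key, function names, and claim fields here are illustrative assumptions only.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key for illustration only.
# Real C2PA credentials are signed with certificate-backed keys.
SIGNING_KEY = b"demo-signing-key"

def attach_credential(media_bytes: bytes, generator: str) -> dict:
    """Build a simplified content credential binding the media's hash to its origin."""
    claim = {
        "generator": generator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_credential(media_bytes: bytes, claim: dict) -> bool:
    """Check that the claim's signature is intact and the media matches the recorded hash."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim.get("signature", ""))
            and hashlib.sha256(media_bytes).hexdigest() == claim["sha256"])
```

The point of the sketch is the verification property the strategy calls for: any alteration to the media bytes, or any edit to the claim (e.g., changing the declared generator), causes verification to fail, so authenticity can be checked programmatically rather than by eye.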

ADDITIONAL EVIDENCE

Example: Generating a deepfake image, video, or audio of someone without their consent (Hunter, 2023)