4. Malicious Actors & Misuse

Deception

AI has become remarkably good at generating fake content: text, photos, audio, and video. The term deepfake refers to synthetic content so convincing that our minds rule out the possibility that it is fake.

Source: MIT AI Risk Repository (mit91)

ENTITY: 2 - AI
INTENT: 3 - Other
TIMING: 2 - Post-deployment
Risk ID: mit91
Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.3 Fraud, scams, and targeted manipulation

Mitigation strategy

1. Proactive implementation of content provenance and AI watermarking
Integrate cryptographic and machine-learning-based watermarking schemes (e.g., SynthID) directly into generative model outputs to embed traceable, resilient signals. This establishes content authenticity and provenance at the point of creation, a critical shift from purely reactive post-hoc detection.

2. Deployment of multimodal deepfake detection and robust authentication protocols
Systematically deploy AI/ML-powered detection tools that analyze visual, audio, and textual inconsistencies across media types, coupled with mandatory redundant verification controls (e.g., multi-factor authentication, secure cross-channel verification) for all high-risk or sensitive transactions and communications.

3. Establishment of a comprehensive deepfake risk management framework and literacy program
Develop and routinely exercise a dedicated deepfake incident response and recovery plan with clearly defined escalation pathways. Concurrently, mandate continuous security awareness and media literacy training for all personnel to cultivate a culture of critical evaluation and strengthen the human firewall against social engineering and digital manipulation.