4. Malicious Actors & Misuse

2 - Post-deployment

Counterfeit

Reproduce or imitate an original work, brand, or style and pass it off as real

Source: MIT AI Risk Repository (mit1258)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1258

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Mandate Embedded Traceability and Digital Watermarking: Require generative AI systems to embed resilient, verifiable digital watermarks and provenance metadata into all synthetic content at the point of creation, establishing accountability and enabling rapid, automated content authentication.

2. Deploy Multi-Layered Automated Content Verification: Implement machine learning models, neural networks, and forensic analysis tools on distribution platforms that analyze content inline and in real time for inconsistencies, verifying authenticity and detecting deepfakes that circumvent embedded safeguards.

3. Promote Comprehensive Digital Media Literacy: Establish mandatory public and organizational training programs that equip individuals with the critical thinking skills and technical knowledge needed to identify and critically evaluate GenAI-generated counterfeit content, reducing susceptibility to fraud and manipulation attacks.
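The first mitigation (embedded traceability with verifiable provenance metadata) can be illustrated with a minimal sketch. Production systems would use an established standard such as C2PA manifests and robust perceptual watermarks with managed signing keys; here, every name (`embed_provenance`, `verify_provenance`, `SIGNING_KEY`, `genai-model-v1`) is hypothetical, and the "watermark" is simplified to an HMAC-signed metadata record over a content hash:

```python
import hashlib
import hmac
import json

# Hypothetical key for the sketch; real deployments use managed, rotated keys.
SIGNING_KEY = b"demo-signing-key"

def embed_provenance(content: bytes, generator_id: str) -> dict:
    """Build a signed provenance record for synthetic content at creation time."""
    record = {
        "generator": generator_id,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check both integrity (hash matches) and authenticity (signature valid)."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)

img = b"...synthetic image bytes..."
rec = embed_provenance(img, "genai-model-v1")
print(verify_provenance(img, rec))          # True: intact, authentic content
print(verify_provenance(b"tampered", rec))  # False: hash no longer matches
```

The design point is that verification is automatic and fast (strategy 2 builds on this): a platform can reject or flag any content whose record is missing, altered, or signed with an unknown key, without needing to classify the media itself.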