4. Malicious Actors & Misuse · 2 - Post-deployment

Falsification

Fabricating or falsely representing evidence, including reports, IDs, and documents

Source: MIT AI Risk Repository (mit1256)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1256

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Establish a robust provenance and content-authenticity infrastructure, employing cryptographic signing, content credentials, and mandatory watermarking for all generative AI outputs, so that chain-of-custody can be verified and authenticity signals communicated to downstream consumers and systems.

2. Implement standardized forensic verification protocols for high-stakes digital artifacts, including reports, identification documents, and legal evidence, mandating metadata-integrity cross-checks and requests for original source files in native formats to validate authenticity and detect falsification.

3. Develop and deploy comprehensive digital media literacy campaigns that build user skepticism, provide critical-evaluation frameworks, and equip the public to recognize and refute the manipulation strategies associated with sophisticated GenAI-generated disinformation.
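The signing-and-verification step in mitigation 1 can be sketched minimally in Python. This is an illustrative example only, not part of the repository entry: it uses a symmetric HMAC key (here a hypothetical `SIGNING_KEY`) for brevity, whereas production provenance systems such as C2PA content credentials use asymmetric signatures issued by a trusted authority.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for illustration; real systems use asymmetric key pairs.
SIGNING_KEY = b"demo-signing-key"

def sign_output(content: bytes, metadata: dict) -> dict:
    """Attach a provenance credential to a generated artifact."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"sha256": digest, **metadata}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_output(content: bytes, credential: dict) -> bool:
    """Check the signature, then check the content hash still matches."""
    expected = hmac.new(
        SIGNING_KEY, credential["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False  # credential forged or tampered with
    claimed = json.loads(credential["payload"])["sha256"]
    return claimed == hashlib.sha256(content).hexdigest()

cred = sign_output(b"generated report text", {"generator": "model-x"})
print(verify_output(b"generated report text", cred))   # True
print(verify_output(b"altered report text", cred))     # False
```

A downstream verifier that lacks the credential, or receives an altered artifact, fails the check, which is exactly the falsification signal mitigation 2's forensic protocols would act on.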