4. Malicious Actors & Misuse · 2 - Post-deployment

Appropriated Likeness

Use or alter a person's likeness or other identifying features

Source: MIT AI Risk Repository (mit1251)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1251

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Implement robust technical safeguards, such as enhanced output filtering and adversarial training, to prevent generative AI models from producing or disseminating unauthorized, realistic depictions of human likenesses (deepfakes, voice clones), in line with a "safety-by-design" paradigm.

2. Mandate and deploy digital provenance tracking mechanisms, such as cryptographic content credentials (e.g., C2PA) and imperceptible watermarking, on all generative AI outputs to ensure traceability and rapid identification of the source of appropriated likenesses.

3. Utilize, and advocate for the strengthening of, legal frameworks, including statutory and common-law rights of publicity and appropriation torts, in conjunction with formal legal measures (e.g., cease-and-desist orders and civil litigation) to deter the non-consensual exploitation and commercial use of an individual's digital persona.
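The provenance-tracking idea in point 2 can be illustrated with a minimal sketch: bind a hash of the generated content to provenance metadata and sign the pair, so a verifier can later confirm both the origin claim and that the content is unmodified. This is only an illustration of the core mechanism; real content-credential systems such as C2PA use X.509 certificate chains and a standardized manifest format rather than a shared HMAC key, and the key, function names, and generator label below are placeholders invented for the example.

```python
import hashlib
import hmac
import json

# Placeholder signing key for the sketch; a real deployment would use
# asymmetric keys with a certificate chain, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def make_credential(content: bytes, generator: str) -> dict:
    """Attach a signed provenance record to a piece of generated content."""
    digest = hashlib.sha256(content).hexdigest()
    claim = {"content_sha256": digest, "generator": generator}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_credential(content: bytes, credential: dict) -> bool:
    """Check the signature, then check the content still matches the claim."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False
    return credential["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()

asset = b"synthetic image bytes"
cred = make_credential(asset, generator="example-model-v1")
print(verify_credential(asset, cred))        # unmodified content verifies
print(verify_credential(b"tampered", cred))  # altered content fails
```

Because the signature covers a hash of the content itself, any alteration of an appropriated likeness after generation invalidates the credential, which is what makes rapid source identification feasible.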