
Impersonation

Assume the identity of a real person and take actions on their behalf

Source: MIT AI Risk Repository (mit1250)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1250

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

- Implement continuous, multi-modal biometric and behavioral authentication with real-time liveness detection. Fusing signals (e.g., facial geometry, vocal cadence, device telemetry) into an ensemble model significantly raises the cost of a successful adversarial spoofing attempt.
- Establish and strictly enforce out-of-band verification protocols for all high-risk, non-routine, or urgent requests involving data or financial transfers. Before execution, contact the purported individual through a pre-validated, alternate communication channel (e.g., a known corporate phone number or a pre-shared passphrase) to confirm identity and intent.
- Institute mandatory, scenario-based security awareness training for all personnel, focused on recognizing the telltale temporal, visual, and acoustic artifacts of deepfake media, as well as the psychological exploitation tactics (e.g., manufactured urgency, appeals to secrecy) common in AI-enabled impersonation scams.
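The first mitigation's signal-fusion idea can be illustrated with a minimal sketch. The modality names, weights, and acceptance threshold below are illustrative assumptions, not part of any specific vendor's authentication stack; a production system would use calibrated per-modality detectors and a learned fusion model rather than fixed weights.

```python
# Minimal sketch of multi-modal liveness-score fusion for impersonation
# detection. All names, weights, and thresholds are hypothetical.

WEIGHTS = {"face_liveness": 0.40, "voice_liveness": 0.35, "device_telemetry": 0.25}
ACCEPT_THRESHOLD = 0.8  # assumed policy value


def fuse_scores(scores: dict) -> float:
    """Weighted-average fusion of per-modality liveness scores in [0, 1]."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)


def authenticate(scores: dict) -> bool:
    """Accept only when the fused score clears the policy threshold.

    A missing modality is treated as a zero score, so spoofing a single
    channel (e.g., a convincing voice clone) cannot pass on its own.
    """
    filled = {name: scores.get(name, 0.0) for name in WEIGHTS}
    return fuse_scores(filled) >= ACCEPT_THRESHOLD


# A near-perfect deepfaked voice alone still fails the ensemble check:
authenticate({"voice_liveness": 0.95})  # -> False
```

The design point is that the attacker must defeat several independent channels simultaneously, which is exactly what the ensemble requirement in the mitigation is meant to enforce.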