4. Malicious Actors & Misuse

GPAI assisted impersonation

GPAI outputs are not always correctly detected as AI-generated across multiple modalities (text, images, audio, video). A malicious actor can use GPAI outputs directly in communications, or use AI-generated details to construct a convincing impersonation (e.g., forged supporting documents). Even if future countermeasures prove potent enough to detect GPAI-generated content, the risk remains if those countermeasures are not well known or are difficult to access.

Source: MIT AI Risk Repository (mit1186)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1186

Domain lineage

4. Malicious Actors & Misuse (223 mapped risks)

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Implement multi-factor authentication (MFA) and modern passwordless systems (e.g., WebAuthn, passkeys) to establish a strong identity layer that is resilient against credential compromise facilitated by AI-generated social engineering and phishing campaigns.

2. Deploy continuous, multi-modal biometric validation incorporating liveness detection to counteract synthetic media, fusing signals such as vocal cadence, facial micro-expressions, and behavioral telemetry for real-time authentication verification and anomaly detection.

3. Institute continuous, role-based security awareness training and mandatory out-of-band verification protocols, equipping personnel to identify subtle AI-generated anomalies (e.g., unnatural language, temporal artifacts, tone inconsistencies) and requiring independent confirmation for all high-risk financial or data access requests.
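The out-of-band verification in the third mitigation can be sketched in code. The example below is a minimal illustration, not a production design: it issues a short-lived, single-use code for a high-risk request, with the assumption that the code is delivered over an independent channel the requester did not choose (e.g., a call to a phone number already on file). All class and variable names here are hypothetical.

```python
import hmac
import secrets
import time


class OutOfBandVerifier:
    """Issues short-lived one-time codes for high-risk requests.

    The code must be delivered over a channel independent of the one the
    request arrived on, so an impersonator who controls the original
    channel cannot complete verification.
    """

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        # request_id -> (code, expiry timestamp); codes are single-use
        self._pending: dict[str, tuple[str, float]] = {}

    def issue(self, request_id: str) -> str:
        # Cryptographically random 6-digit code
        code = f"{secrets.randbelow(10**6):06d}"
        self._pending[request_id] = (code, time.monotonic() + self.ttl)
        return code

    def verify(self, request_id: str, submitted: str) -> bool:
        # Pop so each code can only be checked once
        entry = self._pending.pop(request_id, None)
        if entry is None:
            return False
        code, expires = entry
        if time.monotonic() > expires:
            return False
        # Constant-time comparison avoids leaking digits via timing
        return hmac.compare_digest(code, submitted)
```

A caller would issue a code when a wire transfer or data-access request comes in, deliver it out of band, and execute the request only if `verify` returns `True`; replayed or stale codes fail closed.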