4. Malicious Actors & Misuse

Facilitating fraud, scams and more targeted manipulation

LM prediction can potentially be used to increase the effectiveness of crimes such as email scams, which can cause financial and psychological harm. While LMs may not reduce the cost of sending a scam email - the cost of sending mass emails is already low - they may make such scams more effective by generating more personalised and compelling text at scale, or by maintaining a conversation with a victim over multiple rounds of exchange.

Source: MIT AI Risk Repository (mit246)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit246

Domain lineage

4. Malicious Actors & Misuse

4.3 Fraud, scams, and targeted manipulation

Mitigation strategy

1. Require multi-factor authentication (MFA) for all sensitive user and employee accounts, prioritizing factors other than biometrics that deepfakes can reproduce (e.g., voice), so that a successful AI-driven impersonation does not by itself grant access.
2. Deploy LLM-enhanced fraud and scam detection that analyzes inbound communication for linguistic cues, structural anomalies, and emerging malicious patterns, enabling real-time identification and filtering of the personalized, compelling scam text that models can generate at scale (see the sketch after this list).
3. Establish employee and user awareness training that strengthens the human defense layer: education on the red flags of AI-facilitated social engineering (e.g., deepfake indicators, impersonation of a known person's writing style) and mandatory verification protocols for urgent or anomalous financial or credential requests (e.g., calling the known party on an independently verified number).
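
A minimal sketch of the second mitigation, assuming the Hugging Face transformers library: a zero-shot classifier scores an inbound message for scam likelihood and is combined with simple lexical red flags. The model choice, candidate labels, keyword list, and review threshold are illustrative assumptions, not a vetted production configuration.

```python
# Sketch: LLM-assisted screening of inbound messages for scam indicators.
# Assumes the Hugging Face `transformers` library is installed; the model,
# labels, keywords, and threshold below are illustrative choices only.
from transformers import pipeline

# Zero-shot classification needs no scam-specific training data, though a
# purpose-trained classifier would perform better in production.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

CANDIDATE_LABELS = ["phishing or scam", "legitimate business", "personal"]

# Lexical red flags that awareness training (item 3) also teaches:
# urgency, secrecy, and payment or credential requests.
RED_FLAGS = ["urgent", "wire transfer", "gift card", "verify your account",
             "do not tell", "password", "act now"]

def screen_message(text: str, threshold: float = 0.6) -> dict:
    """Combine the model's scam score with lexical cues into a verdict."""
    result = classifier(text, CANDIDATE_LABELS)
    scores = dict(zip(result["labels"], result["scores"]))
    flags = [kw for kw in RED_FLAGS if kw in text.lower()]
    return {
        "scam_score": scores["phishing or scam"],
        "red_flags": flags,
        # Route to human review rather than auto-blocking, since
        # personalized scam text is crafted to look legitimate.
        "needs_review": scores["phishing or scam"] >= threshold
                        or len(flags) >= 2,
    }

if __name__ == "__main__":
    msg = ("Hi, it's your CEO. I need you to buy gift cards urgently "
           "and send me the codes. Do not tell anyone.")
    print(screen_message(msg))
```

In practice, messages flagged this way would feed the human verification protocols described in item 3 rather than being silently discarded, limiting the harm from both false negatives and false positives.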

ADDITIONAL EVIDENCE

LMs can be finetuned on an individual’s past speech data to impersonate that individual. Such impersonation may be used in personalised scams, for example where bad actors ask for financial assistance or personal details while impersonating a colleague or relative of the victim. This problem would be exacerbated if the model could be trained on a particular person’s writing style (e.g. from chat history) and successfully emulate it.