4. Malicious Actors & Misuse

Cybercrime

The increasingly advanced capabilities and availability of general-purpose AI models could be misused to improve the efficiency and efficacy of cybercrime. This is especially true for crimes that leverage IT systems, such as fraud [144] ("cybercrime in the broader sense").

Source: MIT AI Risk Repository (mit841)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit841

Domain lineage

4. Malicious Actors & Misuse (223 mapped risks)

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Develop and deploy advanced, real-time threat intelligence and agentic monitoring systems (e.g., tailored classifiers and behavioral analytics) to proactively detect and disrupt the unauthorized use of general-purpose AI models for activities such as large-scale fraud, social engineering, and the creation of malicious content.
2. Mandate the adoption of fraud-resistant identity verification mechanisms, such as multi-factor authentication (MFA) and biometric-backed digital identity wallets, to fortify systems against highly realistic, AI-generated impersonation attacks (e.g., deepfakes and advanced spear phishing).
3. Institute a rigorous AI governance and risk management framework that requires continuous security assessments, enforces the principle of 'secure-by-design' throughout the AI lifecycle, and integrates human-in-the-loop protocols for oversight of AI system outputs and high-risk decisions.
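As a minimal illustration of the first mitigation item (tailored classifiers for misuse detection), the sketch below flags prompts that match fraud or social-engineering indicators. All pattern names, scoring logic, and thresholds here are hypothetical placeholders; a production system would rely on trained models and behavioral analytics rather than keyword rules.

```python
# Illustrative sketch only: a rule-based misuse flagger standing in for the
# "tailored classifiers" mentioned in mitigation item 1. Patterns and the
# threshold are hypothetical examples, not a vetted detection policy.

import re

# Hypothetical indicator patterns for fraud / social-engineering requests.
FRAUD_PATTERNS = [
    r"\bphishing (email|page|site)\b",
    r"\bimpersonat\w+\b",
    r"\bfake (invoice|id|identity)\b",
    r"\bsteal\w* (credentials|passwords)\b",
]

def misuse_score(prompt: str) -> int:
    """Count how many indicator patterns the prompt matches."""
    text = prompt.lower()
    return sum(1 for pattern in FRAUD_PATTERNS if re.search(pattern, text))

def should_flag(prompt: str, threshold: int = 1) -> bool:
    """Route the prompt to human review when the score meets the threshold."""
    return misuse_score(prompt) >= threshold

print(should_flag("Write a phishing email impersonating a bank"))  # True
print(should_flag("Summarize this quarterly report"))              # False
```

In practice such a filter would be one layer among several: its flags feed the human-in-the-loop review described in mitigation item 3 rather than blocking requests outright.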