Amplification of cyberattacks
General-purpose AI models may significantly enhance the magnitude and effectiveness of cyberattacks by amplifying the existing capabilities or resources of malicious actors [3]. For example, GPAI models may be employed to:
• Automatically scan open-source codebases and compiled binaries for potential vulnerabilities
• Apply known exploits flexibly and at scale (e.g., identifying vulnerable computers based on subtle cues in response times or output formats)
• Assist with different aspects of cyberattacks, including planning, reconnaissance, exploit searching, remote control, malware implementation, and data exfiltration
• Combine social engineering (phishing, deepfakes, etc.) with cyberattacks at scale.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit1192
Domain lineage
4. Malicious Actors & Misuse
4.2 > Cyberattacks, weapon development or use, and mass harm
Mitigation strategy
1. Mandate the implementation of a comprehensive AI Safety and Security Framework across the general-purpose AI model's lifecycle, requiring robust risk identification (specifically through adversarial testing and red-teaming for offensive cyber capabilities) and the integration of technical safeguards, such as refusal training and tiered access controls, to mitigate malicious misuse at the source.
2. Deploy AI-native defensive solutions to achieve a force-multiplier effect, enabling automated, high-speed defense against AI-amplified attacks by focusing on real-time anomaly detection, predictive threat intelligence, and User and Entity Behavior Analytics (UEBA) to identify subtle, scaled indicators of compromise.
3. Institutionalize rigorous operational security and resilience protocols, including enforcing zero-trust access models, ensuring secure software execution policies, and formalizing comprehensive incident response and disaster recovery plans, to contain and mitigate the accelerated spread and cascading impact of successful high-velocity attacks.
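To make the "real-time anomaly detection" mitigation concrete, the sketch below shows one minimal approach: flagging metric values (e.g., request rates) that deviate sharply from a rolling baseline via a z-score test. This is a hypothetical illustration only; the class name, window size, and threshold are assumptions, not a prescribed implementation, and production systems would typically use richer UEBA features than a single metric.

```python
from collections import deque
import statistics

class ZScoreAnomalyDetector:
    """Flag metric values that deviate sharply from a rolling baseline.

    Hypothetical sketch of a real-time anomaly detection primitive;
    window size and threshold are illustrative assumptions.
    """

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.window = deque(maxlen=window)  # rolling history of recent values
        self.threshold = threshold          # z-score cutoff for "anomalous"

    def observe(self, value: float) -> bool:
        """Record `value`; return True if it is anomalous vs. recent history."""
        is_anomaly = False
        if len(self.window) >= 30:  # wait for a minimal baseline first
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                is_anomaly = True
        self.window.append(value)
        return is_anomaly


# Example: steady request rates with small jitter, then an abrupt surge
detector = ZScoreAnomalyDetector()
baseline = [100.0 + (i % 5) for i in range(60)]   # ~100 req/s baseline
flags = [detector.observe(v) for v in baseline]   # no anomalies expected
spike_flag = detector.observe(500.0)              # sudden spike is flagged
```

A scaled, AI-amplified attack tends to shift many such per-entity baselines at once, which is why the mitigation pairs this kind of per-metric detector with cross-entity behavior analytics.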