Cyberspace risks (Risks of abuse for cyberattacks)
AI can be used to launch automated cyberattacks or to increase attack efficiency, including discovering and exploiting vulnerabilities, cracking passwords, generating malicious code, sending phishing emails, scanning networks, and conducting social engineering attacks. All of these lower the barrier to entry for cyberattacks and make security protection more difficult.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit697
Domain lineage
4. Malicious Actors & Misuse
4.2 > Cyberattacks, weapon development or use, and mass harm
Mitigation strategy
1. Implement an AI-Native Cyber Defense Strategy: Deploy a comprehensive, AI-powered cybersecurity platform that leverages machine learning and generative AI for continuous, real-time threat detection, anomaly identification (User and Entity Behavior Analytics, UEBA), and predictive threat intelligence (a minimal anomaly-detection sketch follows this list). This proactive approach is essential for countering the speed and scale of automated, AI-enabled attacks.
2. Mandate and Govern AI Security Compliance: Establish mandatory AI security compliance programs and integrate them within an overarching AI risk management framework (e.g., NIST AI RMF). This requires considering attack surfaces during AI system deployment, adopting stringent IT reforms, and ensuring continuous model auditing and validation to secure the AI supply chain against adversarial manipulation and data-integrity risks.
3. Develop an AI-Specific Incident Response and Training Protocol: Create and regularly test an incident response plan specifically designed to contain and remediate rapid, AI-driven breaches. In addition, institute specialized employee awareness training focused on recognizing highly convincing AI-enabled social engineering, such as deepfake audio/chat and sophisticated phishing, to address the human-element vulnerability.
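As a minimal illustration of the UEBA-style anomaly detection referenced in strategy 1, the sketch below fits an IsolationForest model to hypothetical per-user session features and flags a session whose behavior departs sharply from the learned baseline. The feature set, contamination rate, and synthetic data are assumptions for illustration only, not the behavior of any specific platform.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: login hour, failed-login count,
# MB transferred, and distinct internal hosts contacted.
rng = np.random.default_rng(0)
baseline_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # typical mid-morning login hour
    rng.poisson(1, 500),      # occasional failed logins
    rng.normal(50, 15, 500),  # normal data transfer volume
    rng.poisson(3, 500),      # few hosts contacted per session
])

# Train an unsupervised anomaly detector on the behavioral baseline.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_sessions)

# A session resembling automated credential abuse: 3 a.m. login,
# many failed attempts, heavy data transfer, broad internal scanning.
suspect_session = np.array([[3, 40, 900, 120]])
print(detector.predict(suspect_session))        # -1 marks the session anomalous
print(detector.score_samples(suspect_session))  # lower score = more anomalous

In practice, a deployed UEBA pipeline would compute such features from authentication and network logs per user or entity, retrain periodically, and route anomalous sessions to the incident response process described in strategy 3.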