Back to the MIT repository
4. Malicious Actors & Misuse2 - Post-deployment

Malicious use and abuse (cyberattacks)

Generative AI can help amplify the frequency and destructiveness of cyberattacks.311 It has the capacity “to increase the accessibility, success rate, scale, speed, stealth, and potency of cyberattacks. It enables the identification of critical vulnerabilities within targeted systems, facilitates the increase of the scale of cyberattacks, and accelerates the process by discovering innovative methods of system infiltration. Cyberattacks can inflict significant damage and may impact critical infrastructure, including electrical grids, financial systems, and weapons management systems.

Source: MIT AI Risk Repositorymit730

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit730

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Establish a Comprehensive AI Governance and Risk Management Framework Develop and deploy a formal AI Governance Framework (e.g., aligned with NIST AI RMF) that outlines ethical standards, legal compliance, defined roles and responsibilities, and clear protocols for risk assessment and accountability across the entire AI system lifecycle. 2. Implement Proactive and Adaptive AI-Native Cybersecurity Solutions Integrate AI-native cybersecurity platforms that leverage machine learning for enhanced threat detection (e.g., real-time anomaly detection and User and Entity Behavior Analytics or UEBA), automated incident response and orchestration (SOAR), and proactive vulnerability identification to counter the increased sophistication and speed of AI-driven attacks. 3. Mandate Continuous Adversarial Testing and Resilience Engineering Systematically conduct red-teaming and adversarial testing exercises, including prompt manipulation and model inversion attacks, on all generative AI models and integrations to proactively identify and mitigate vulnerabilities, thereby ensuring model robustness and resilience against evolving AI-enabled threat vectors.