4. Malicious Actors & Misuse

Cyberattacks

Generative AI facilitating the damage, disruption, or destruction of a third-party system and/or its components via malfunction, cyberattacks, etc.

Source: MIT AI Risk Repository (mit1364)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit1364

Domain lineage

4. Malicious Actors & Misuse (223 mapped risks)

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Implement Advanced AI-Native Cybersecurity Platforms. Deploy Generative AI-enabled security solutions to establish behavioral baselines (User and Entity Behavior Analytics, UEBA), facilitate proactive threat hunting, and provide real-time anomaly detection that surpasses traditional signature-based methods. Couple this with automated incident response capabilities that instantly isolate compromised segments and apply patches, reducing the critical window of exposure in AI-speed attacks.

2. Establish Comprehensive Generative AI Governance and Model Security. Institute resilient data governance and secure development practices throughout the Generative AI model lifecycle to prevent misuse and the creation of malicious outputs. Key technical measures include AI firewalls or output guardrails (e.g., rejection sampling, controlled decoding) that actively monitor and validate model inputs and outputs against security policies, along with rigorous vulnerability testing to secure the models themselves.

3. Mandate Specialized Employee Awareness and Simulation Training. Develop and enforce mandatory security awareness training that specifically addresses hyper-realistic AI-powered social engineering attacks, such as deepfake audio/video and tailored phishing campaigns. These programs should use realistic, AI-generated simulation scenarios to sharpen employees' critical thinking and their ability to detect sophisticated, context-aware manipulation techniques.
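The behavioral-baseline idea behind UEBA in item 1 can be sketched in a few lines: learn each user's normal activity level, then flag events that deviate sharply from it. This is a minimal illustration only; the event field (daily login counts), the z-score rule, and the threshold of 3.0 are assumptions for the sketch, not any product's actual detection logic.

```python
# UEBA-style anomaly detection sketch: per-user baseline + z-score flagging.
# Daily login counts and the z-threshold are illustrative assumptions.
from statistics import mean, stdev

def build_baseline(history):
    """history: {user: list of daily login counts} -> {user: (mean, stdev)}."""
    return {user: (mean(counts), stdev(counts)) for user, counts in history.items()}

def is_anomalous(user, todays_count, baseline, z_threshold=3.0):
    """Flag today's count if it sits more than z_threshold stdevs from the mean."""
    mu, sigma = baseline[user]
    if sigma == 0:
        return todays_count != mu  # no historical variance: any change is anomalous
    return abs(todays_count - mu) / sigma > z_threshold

history = {"alice": [4, 5, 6, 5, 4, 5, 6], "bob": [2, 3, 2, 2, 3, 2, 3]}
baseline = build_baseline(history)
print(is_anomalous("alice", 5, baseline))   # within baseline
print(is_anomalous("alice", 40, baseline))  # far outside baseline
```

A real deployment would replace the single count with many behavioral features (logins, data volume, access patterns) and a learned model, but the baseline-then-deviation structure is the same.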
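The output guardrails in item 2 amount to checking each model draft against a security policy before release, regenerating when a draft fails (a crude form of rejection sampling). Below is a minimal sketch under stated assumptions: the regex policy, the `generate` stub, and the retry limit are all illustrative, not a real model's or firewall's API.

```python
# Output guardrail sketch: reject model drafts that violate a security policy,
# retrying generation a few times before refusing. Patterns are illustrative.
import re

BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bpassword\s*[:=]"),   # credential leakage
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like identifiers
]

def violates_policy(text):
    """True if any blocked pattern appears in the draft."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

def guarded_generate(generate, prompt, max_attempts=3):
    """Sample drafts until one passes the policy check; refuse otherwise."""
    for _ in range(max_attempts):
        draft = generate(prompt)
        if not violates_policy(draft):
            return draft
    return "[refused: output violated security policy]"

# Usage with a stand-in generator whose first draft leaks a credential:
drafts = iter(["password: hunter2", "Here is a safe summary."])
print(guarded_generate(lambda p: next(drafts), "summarize"))
```

Controlled decoding, mentioned in the same item, moves this check inside generation (steering token choices) rather than filtering finished drafts, but the policy-validation step is analogous.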