4. Malicious Actors & Misuse

Cyber offence

Attackers are beginning to use general-purpose AI for offensive cyber operations, presenting growing but currently limited risks. Current systems have demonstrated capabilities in low- and medium-complexity cybersecurity tasks, with state-sponsored threat actors actively exploring AI to survey target systems. Malicious actors of varying skill levels can leverage these capabilities against people, organisations, and critical infrastructure such as power grids.

Source: MIT AI Risk Repository (mit1022)

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit1022

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.2 Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Deploy AI-native cybersecurity platforms with continuous monitoring, using machine learning for User and Entity Behavior Analytics (UEBA) to establish behavioural baselines and detect novel adversarial activity with high fidelity across organisational and critical-infrastructure systems.

2. Develop and implement an automated, AI-driven incident response framework for rapid threat containment, including isolation of compromised endpoints and automated application of remedial controls, thereby reducing Mean Time to Respond (MTTR) for sophisticated cyber incidents.

3. Impose stringent governance and access-control policies, such as 'know-your-customer' protocols and secure, monitored API access, to limit the availability of high-capability general-purpose AI (GPAI) models that malicious actors could leverage to scale sophisticated cyber offensives.
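The baseline-and-detect approach in mitigation item 1 can be sketched minimally: establish a per-entity statistical baseline of normal activity, then flag observations that deviate sharply from it. The entity names, event counts, and z-score threshold below are illustrative assumptions, not part of the repository entry, and a production UEBA system would use far richer features and models.

```python
# Minimal sketch of UEBA-style baseline anomaly detection.
# Entity names, counts, and the threshold are illustrative assumptions.
from statistics import mean, stdev

def build_baseline(history):
    """Per-entity baseline: mean and standard deviation of daily event counts."""
    return {entity: (mean(counts), stdev(counts))
            for entity, counts in history.items()}

def is_anomalous(baseline, entity, observed, threshold=3.0):
    """Flag activity whose z-score against the entity's baseline exceeds threshold."""
    mu, sigma = baseline[entity]
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical service account with a week of normal daily login counts.
history = {"svc-account": [10, 12, 11, 9, 10, 13, 11]}
baseline = build_baseline(history)
print(is_anomalous(baseline, "svc-account", 240))  # a large spike is flagged
print(is_anomalous(baseline, "svc-account", 12))   # in-range activity is not
```

The design choice here is the one the mitigation text implies: detection keys off deviation from each entity's own learned behaviour rather than fixed signatures, which is what lets it surface novel adversarial activity.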