
Terrorist access

Powerful AI technologies may fall into the hands of terrorists.

Source: MIT AI Risk Repository (mit1087)

ENTITY: 1 - Human

INTENT: 3 - Other

TIMING: 3 - Other

Risk ID: mit1087

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.2 Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Implement rigorous **restricted access controls** for highly capable AI systems, mandating controlled interactions through secure cloud services and using **Know-Your-Customer (KYC) screening** to prevent unauthorized acquisition by non-state malicious actors.
2. Enforce the **removal of dual-use capabilities**, such as advanced biological research or autonomous cyber-operation tools, from general-purpose AI models, and apply **strict access management** to any specialized AIs that retain such high-risk functionality.
3. Establish a **strict legal liability regime** for developers of general-purpose AI to incentivize safer practices, and mandate **state-of-the-art information security** and continuous **adversarial testing** to secure model weights against theft or accidental leakage.
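The first mitigation (controlled access through a secure service with KYC screening) can be sketched in code. This is a minimal, purely illustrative Python sketch; the names `User`, `ModelGateway`, and `serve` are invented for illustration and do not come from the repository or any real API.

```python
# Illustrative sketch only: a hypothetical gateway that refuses model
# requests from callers who have not passed KYC screening.
from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    kyc_verified: bool  # set True only after KYC screening succeeds


class ModelGateway:
    """Front door for a hosted model: only KYC-verified users get through."""

    def serve(self, user: User, prompt: str) -> str:
        if not user.kyc_verified:
            # Unverified callers never reach the model.
            raise PermissionError(
                f"user {user.user_id} has not passed KYC screening"
            )
        return self._run_model(prompt)

    def _run_model(self, prompt: str) -> str:
        # Placeholder for the actual model call behind the secure service.
        return f"response to: {prompt!r}"
```

The point of the sketch is the choke point: because the model is only reachable through the hosted `serve` method, the KYC check cannot be bypassed by downloading weights locally.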