4. Malicious Actors & Misuse

Type 5: Criminal weaponization

One or more criminal entities could create AI to intentionally inflict harm, such as for terrorism or to combat law enforcement.

Source: MIT AI Risk Repository (mit05)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit05

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Implement a secure-by-design AI development lifecycle (AI-SDLC), incorporating rigorous input and output validation, adversarial training, and model hardening techniques such as encryption and differential privacy to proactively prevent the intentional manipulation or theft of AI models for criminal weaponization.

2. Establish continuous, real-time monitoring of deployed AI systems for anomalous behavior, usage pattern shifts, and integrity breaches to rapidly detect and respond to indicators of misuse, resource jacking, or unauthorized re-purposing of the technology.

3. Institute mandatory AI Security Compliance programs for high-risk and governmental AI uses, complemented by robust cross-sector collaboration to facilitate the timely sharing of threat intelligence between industry, regulators, and law enforcement to counter AI-enabled crime at scale.
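The second mitigation above hinges on detecting usage-pattern shifts in deployed systems. As a minimal sketch, not taken from the source, one simple approach is a rolling z-score over per-interval request counts to an AI endpoint; the class name, window size, and threshold here are all hypothetical illustrations.

```python
# Hypothetical sketch: flag anomalous usage of a deployed model endpoint
# by comparing each interval's request count against a rolling baseline.
from collections import deque
from statistics import mean, stdev

class UsageMonitor:
    """Tracks per-interval request counts and flags sudden spikes."""

    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent per-interval counts
        self.z_threshold = z_threshold       # how many std-devs counts as anomalous

    def observe(self, count: int) -> bool:
        """Record one interval's request count; return True if it is anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # require a baseline before judging
            mu = mean(self.history)
            sigma = stdev(self.history) or 1.0  # guard against zero variance
            anomalous = (count - mu) / sigma > self.z_threshold
        self.history.append(count)
        return anomalous

monitor = UsageMonitor()
for c in [100, 98, 103, 99, 101, 97, 102, 100, 99, 101]:
    monitor.observe(c)  # build a baseline of normal traffic
print(monitor.observe(500))  # sudden spike against the baseline → True
```

In practice this single signal would feed a broader detection pipeline (integrity checks, behavioral drift metrics, alerting to responders), but the rolling-baseline idea is the core of usage-shift monitoring.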

ADDITIONAL EVIDENCE

It is not difficult to envision AI technology causing harm if it falls into the hands of people intent on causing it, so no stories are provided in this section. It is enough to imagine an algorithm designed to pilot delivery drones being re-purposed to carry explosive charges, or an algorithm designed to deliver therapy having its goal altered to inflict psychological trauma.