4. Malicious Actors & Misuse

Malicious Use of AI

Malicious use of AI can threaten digital security, physical security, and political security. International law enforcement agencies grapple with a range of risks linked to the malicious use of AI.

Source: MIT AI Risk Repository (mit473)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit473

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Implement stringent, tiered access controls and "know-your-customer" screening for powerful or dual-use AI models, particularly those with advanced cyber-attack or biological capabilities, to restrict proliferation to malicious actors.

2. Establish a framework for continuous, real-time threat detection, including adversarially robust anomaly detection and behavioral monitoring across training pipelines and inference endpoints, to quickly identify and mitigate malicious use or data tampering.

3. Enforce a strict legal and organizational accountability regime on developers of general-purpose AI systems for misuse or failure, fostering a safety-oriented organizational culture and encouraging proactive investment in security-by-design principles throughout the AI lifecycle.
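As a hedged illustration of the monitoring idea in item 2, the sketch below flags unusual request volumes at an inference endpoint using a simple z-score check. The function name, the threshold, and the baseline figures are assumptions for illustration, not part of the repository entry; a production detector would need to be adversarially robust, which a plain z-score is not.

```python
from statistics import mean, stdev

def is_anomalous(history, current, z_threshold=3.0):
    """Flag a per-minute request count as anomalous if it deviates
    from the historical mean by more than z_threshold standard
    deviations. Illustrative only: real deployments would use
    adversarially robust detectors, not a plain z-score."""
    if len(history) < 2:
        return False  # not enough data to estimate variance
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Hypothetical per-minute request counts for an inference endpoint
baseline = [100, 104, 98, 101, 97, 103, 99, 102]
```

Against this baseline, a sudden spike (for example, 500 requests in a minute, which could indicate automated misuse) would be flagged, while ordinary fluctuation would not.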

ADDITIONAL EVIDENCE

AI can be used to perpetrate a crime directly, or to subvert another AI system by tampering with its data.
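One basic defense against the data-tampering route mentioned above is integrity checking of training data against a trusted manifest. The following is a minimal sketch, assuming a manifest of SHA-256 digests recorded when the dataset was last trusted; the function names are illustrative, not from the source.

```python
import hashlib

def file_digest(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(manifest):
    """Compare each file's current digest against a trusted manifest
    mapping {path: expected_digest}; return the paths that no longer
    match (i.e. candidates for tampering)."""
    return [p for p, expected in manifest.items()
            if file_digest(p) != expected]
```

This catches silent modification of files at rest; it does not, by itself, detect poisoning introduced before the manifest was recorded.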