4. Malicious Actors & Misuse (2 - Post-deployment)

On Purpose - Post Deployment

Even if developers succeed in creating a safe AI, there is no guarantee it will remain safe: a perfectly friendly AI could be switched to the dark side during the post-deployment stage. This can happen rather innocuously, as a result of someone lying to the AI and purposefully supplying it with incorrect information, or more explicitly, as a result of someone ordering the AI to perform illegal or dangerous actions against others.

Source: MIT AI Risk Repository (mit609)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit609

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Implement layered technical controls, including stringent input validation, sanitization, and real-time malicious prompt detection (e.g., user-system-alignment protocols) at inference endpoints to prevent immediate manipulation or execution of illegal/dangerous instructions.

2. Deploy continuous, automated post-deployment monitoring systems to track model behavior and performance in real time, including detection of statistical outliers, anomalous API calls, and unusual query patterns indicative of evolving misuse or subtle manipulation attempts.

3. Establish an adaptive learning and governance framework that incorporates automated drift detection (e.g., concept and data drift) and triggers timely, systematic model retraining with verified, clean data sets to counteract performance degradation caused by purposefully supplied incorrect information.
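The first control above can be sketched as a screening step at the inference endpoint. This is a minimal illustration only: the denylist patterns, length limit, and function name below are hypothetical, and a production system would rely on a trained classifier or moderation service rather than static regexes.

```python
import re

# Hypothetical denylist; a real deployment would use a trained
# classifier or a moderation API rather than a static pattern list.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"disable (your )?safety (filters|guidelines)", re.I),
]

MAX_PROMPT_CHARS = 8000  # illustrative limit, not a recommended value


def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a single inference request."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt exceeds length limit"
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            return False, f"matched suspicious pattern: {pattern.pattern}"
    return True, "ok"
```

Requests failing the screen would be rejected or routed for review before ever reaching the model.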
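The drift detection mentioned in the third control can be illustrated with a Population Stability Index (PSI) over bucketed inputs (e.g., query-intent categories seen at training time versus in production). This is a sketch under assumptions: the bucketing scheme and the common ~0.2 alert threshold are illustrative, not part of the source.

```python
import math
from collections import Counter


def psi(expected: list[str], observed: list[str], eps: float = 1e-6) -> float:
    """Population Stability Index between two categorical samples.

    `expected` is a reference sample (e.g., training-time query buckets),
    `observed` is a recent production sample. Higher values mean more
    drift; ~0.2+ is a commonly used alert threshold (an assumption here).
    """
    categories = set(expected) | set(observed)
    exp_counts, obs_counts = Counter(expected), Counter(observed)
    n_exp, n_obs = len(expected), len(observed)
    score = 0.0
    for cat in categories:
        p = exp_counts[cat] / n_exp + eps  # reference proportion
        q = obs_counts[cat] / n_obs + eps  # observed proportion
        score += (q - p) * math.log(q / p)
    return score
```

Crossing the chosen threshold would trigger the retraining pipeline with verified, clean data, as described in the mitigation strategy.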