Back to the MIT repository
2. Privacy & Security1 - Pre-deployment

Jailbreak in LLM Malicious Use - Backdoor Attack

However, there are still ones who can leave holes in the training dataset, making LLMs appear safe on average, but generate harmful content under other specific conditions. This kind of attack can be categorized as backdoor attack. Evan et al. developed a backdoor model that behaves as expected when trained, but exhibits different and potentially harmful behavior when deployed [81]. The results show that these backdoor behaviors persist even after multiple security training techniques are applied.

Source: MIT AI Risk Repositorymit1516

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

1 - Pre-deployment

Risk ID

mit1516

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Establish and enforce comprehensive Data Provenance and Integrity controls, including rigorous vetting of all external training/fine-tuning data sources and model weights, utilizing tools like ML-BOM (Bill of Materials) and version control (DVC) to track all data transformations and enable rapid rollback to a clean state. 2. Implement strict Access Controls based on the Principle of Least Privilege across the entire LLM development pipeline, including all data repositories and model weight modification interfaces, and institute continuous logging and auditing to prevent unauthorized data or parameter manipulation by malicious internal or external actors. 3. Conduct systematic Model Red Teaming and Adversarial Training during the pre-deployment phase, specifically designed to detect and mitigate backdoor vulnerabilities by testing for low-success-rate triggers across diverse attack modalities (e.g., Data Poisoning, Weight Poisoning, Chain-of-Thought Hijacking).