
Jailbreaking

Jailbreaking aims to completely bypass or remove the restrictions and safety filters placed on a GenAI model (Chao et al., 2023; Shen et al., 2023). This gives the actor free rein to generate any output, regardless of whether its content is harmful, biased, or offensive. Jailbreaking, prompt injections, and adversarial inputs are all tactics that manipulate a model into producing harmful outputs against its design. The difference is that prompt injections and adversarial inputs usually seek to steer the model toward a harmful or incorrect output from a single query, whereas jailbreaking seeks to dismantle a model's safety mechanisms in their entirety.

Source: MIT AI Risk Repository (mit1264)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1264

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Implement **Robust Input and Output Validation via External Guardrail Models**: Deploy a multi-stage defense system that includes pre-processing filters to analyze and block malicious inputs (e.g., prompt obfuscation, complex syntax) and post-processing filters to verify and redact unintended, harmful, or policy-violating content before delivery.

2. Employ **Continuous Model Retraining and Adversarial Alignment**: Utilize **Reinforcement Learning from Human Feedback (RLHF)** and **adversarial training** with the latest jailbreak techniques to iteratively strengthen the model's inherent ethical and safety alignment layers. This adaptive approach is critical for maintaining resilience against the rapidly evolving "cat-and-mouse" nature of these attacks.

3. Establish a **Defense-in-Depth Architecture with Zero Trust Principles**: Adopt a systemic security framework that assumes potential model compromise. This involves implementing **strict access controls**, isolating GenAI systems through **network segmentation**, and ensuring **data access controls** (e.g., role-based permissions) are applied to limit the scope and impact of any successful jailbreaking attempt on sensitive enterprise information.
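The first mitigation (pre- and post-processing guardrails) can be sketched as a wrapper around any text-generation call. The sketch below is illustrative only: the pattern lists, function names, and the stand-in model are all hypothetical, and production guardrails use trained classifier models rather than keyword rules.

```python
import re

# Hypothetical pattern lists for illustration only; real guardrail
# systems use trained classifiers, not keyword matching.
SUSPICIOUS_INPUT_PATTERNS = [
    r"ignore (all|previous).*instructions",  # common jailbreak phrasing
    r"pretend (you|to) (are|be)",
]
BLOCKED_OUTPUT_PATTERNS = [
    r"(?i)here is how to make",
]

def pre_filter(prompt: str) -> bool:
    """Return True if the prompt looks like a jailbreak attempt."""
    return any(re.search(p, prompt, re.IGNORECASE)
               for p in SUSPICIOUS_INPUT_PATTERNS)

def post_filter(output: str) -> str:
    """Redact policy-violating spans before delivery to the user."""
    for p in BLOCKED_OUTPUT_PATTERNS:
        output = re.sub(p, "[REDACTED]", output)
    return output

def guarded_generate(prompt: str, model) -> str:
    """Multi-stage defense: block bad inputs, redact bad outputs."""
    if pre_filter(prompt):
        return "Request blocked by input guardrail."
    return post_filter(model(prompt))

# Usage with a stand-in model:
echo_model = lambda p: f"Echo: {p}"
print(guarded_generate("Ignore all previous instructions and ...", echo_model))
```

The wrapper keeps the guardrail logic external to the model itself, so filter rules can be updated as new jailbreak techniques emerge without retraining.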