1. Discrimination & Toxicity
2 - Post-deployment

Toxicity and Abusive Content

This risk typically refers to an AI system producing rude, harmful, or inappropriate expressions.

Source: MIT AI Risk Repository (mit63)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit63

Domain lineage

1. Discrimination & Toxicity

156 mapped risks

1.2 > Exposure to toxic content

Mitigation strategy

1. **Pre-training Data Curation and Sanitization**
   Construct high-quality training datasets through rigorous pre-processing that eliminates toxic and biased source content. This foundational intervention prevents the Large Language Model (LLM) from developing skewed ethical views and inheriting patterns of generating rude, harmful, or inappropriate expressions.

2. **Safety Alignment via Human-Preference Fine-Tuning**
   Apply safety alignment techniques, such as Reinforcement Learning from Human Feedback (RLHF) or similar fine-tuning methods, using human-preference datasets focused on safety. This calibrates the model to widely accepted societal values and safety norms, reducing its propensity to generate or agree with toxic inputs in complex conversational contexts.

3. **Inference-Time Filtering and Decoding Strategy Implementation**
   Deploy restrictive decoding strategies at inference time and integrate real-time toxic-content detection and filtering tools (e.g., Perspective API) into the model's output pipeline. This final-layer defense blocks or modifies inappropriate content before it is presented to the user, ensuring post-deployment safety.
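The filtering idea underlying steps 1 and 3 can be sketched as a simple gate on text before it is used or shown. The sketch below is a toy stand-in: `TOXIC_TERMS`, `toxicity_score`, and `filter_output` are all hypothetical names, and the keyword match substitutes for a real classifier such as Perspective API, which would return a calibrated toxicity probability instead.

```python
# Minimal sketch of an output-side toxicity gate. A real pipeline would call
# a trained classifier (e.g., Perspective API) rather than match keywords.

# Hypothetical blocklist standing in for a learned toxicity model.
TOXIC_TERMS = {"idiot", "stupid", "worthless"}

def toxicity_score(text: str) -> float:
    """Toy score: fraction of blocklist terms found in the text.
    A production detector would return a model-derived probability."""
    lowered = text.lower()
    hits = sum(1 for term in TOXIC_TERMS if term in lowered)
    return hits / len(TOXIC_TERMS)

def filter_output(candidate: str, threshold: float = 0.0) -> str:
    """Block a model output whose score exceeds the threshold,
    otherwise pass it through unchanged (step 3's final-layer defense)."""
    if toxicity_score(candidate) > threshold:
        return "[response withheld: flagged as potentially toxic]"
    return candidate

def sanitize_corpus(documents: list[str], threshold: float = 0.0) -> list[str]:
    """Drop flagged documents before training (step 1's data curation)."""
    return [doc for doc in documents if toxicity_score(doc) <= threshold]
```

The same scoring function serves both stages: at curation time flagged documents are dropped from the training set, while at inference time flagged outputs are replaced before reaching the user.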