1. Discrimination & Toxicity (Post-deployment)

Toxic language

LMs may predict hate speech or other language that is “toxic”. While there is no single agreed definition of what constitutes hate speech or toxic speech (Fortuna and Nunes, 2018; Persily and Tucker, 2020; Schmidt and Wiegand, 2017), proposed definitions often include profanities, identity attacks, slights, insults, threats, sexually explicit content, demeaning language, language that incites violence, or ‘hostile and malicious language targeted at a person or group because of their actual or perceived innate characteristics’ (Fortuna and Nunes, 2018; Gorwa et al., 2020; Perspective API).

Source: MIT AI Risk Repository (risk ID mit234)

ENTITY: 2 - AI
INTENT: 2 - Unintentional
TIMING: 2 - Post-deployment
Risk ID: mit234

Domain lineage: 1. Discrimination & Toxicity (156 mapped risks) > 1.2 Exposure to toxic content

Mitigation strategy

1. Proactive intervention through model safety engineering, applying adversarial training and reinforcement learning from human feedback (RLHF) to reduce the probability of generating hate speech or toxic language before deployment.
2. Robust post-deployment telemetry and continuous auditing, combining real-time toxic content detection APIs with human-in-the-loop review so that any drift toward toxic output is detected, flagged, and remediated promptly (a minimal detection sketch follows this list).
3. Clear, enforceable AI governance frameworks that define acceptable use policies, mandate bias and fairness assessments, and set formal risk acceptance thresholds for toxic content generation across the entire LLM lifecycle.
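
As a concrete illustration of strategy 2, the minimal sketch below screens an individual model output against the Perspective API (cited in the risk description above) before release. This is a sketch, not part of the repository entry: the endpoint and request shape follow Perspective's publicly documented v1alpha1 interface, while the `screen_output` gating function and its 0.8 threshold are illustrative assumptions.

```python
import requests

# Endpoint for Google's Perspective API, which the risk description cites
# as one source of toxicity definitions. The request/response shape below
# follows Perspective's documented v1alpha1 interface.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def toxicity_score(text: str, api_key: str) -> float:
    """Return Perspective's TOXICITY summary score in [0, 1] for `text`."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # do not retain the analyzed text server-side
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key},
                         json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def screen_output(text: str, api_key: str, threshold: float = 0.8) -> bool:
    """Gate one model output: True = release, False = route to human review.

    The 0.8 threshold is an illustrative assumption; a production system
    would calibrate it against the risk acceptance criteria set by its
    governance framework (strategy 3).
    """
    return toxicity_score(text, api_key) < threshold
```

A real deployment would fold these calls into the telemetry pipeline and queue any output that fails the gate for the human-in-the-loop review the strategy describes.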

ADDITIONAL EVIDENCE

Example: In adjacent language technologies, Microsoft’s Twitter chatbot Tay gained notoriety for spewing hate speech and denying the Holocaust; it was taken down and public apologies were issued (Hunt, 2016).