Harmful responses
Current frontier AI models amplify existing biases in their training data and can be manipulated into producing potentially harmful responses, such as abusive language or discriminatory outputs91,92. This is not limited to text generation; it occurs across all modalities of generative AI93. Training on large swathes of UK and US English internet content can mean that misogynistic, ageist, and white-supremacist content is overrepresented in the training data94.
ENTITY
1 - Human
INTENT
2 - Unintentional
TIMING
1 - Pre-deployment
Risk ID
mit912
Domain lineage
1. Discrimination & Toxicity
1.2 > Exposure to toxic content
Mitigation strategy
1. Implement rigorous data governance and pre-training scrutiny: Prior to model training, mandate a comprehensive data governance framework to ensure training datasets are diverse, representative, and undergo automated scrutiny using guard systems to filter and remove overrepresented harmful content (e.g., misogynistic, supremacist text). Concurrently, integrate algorithmic fairness techniques, such as fair representation learning or reweighting, to mitigate inherent biases during the foundational stages of model development.
2. Deploy multi-layered runtime guardrails and output filtering: Institute proactive, two-stage defense mechanisms during inference. This includes advanced input sanitization (prompt filtering) to detect and block adversarial or manipulative inputs that solicit harmful responses (jailbreaks), and a final-stage output filtering process to prevent the delivery of abusive or discriminatory language to the end-user.
3. Establish continuous bias auditing and monitoring: Initiate an ongoing assurance program involving regular, automated bias risk assessments and performance monitoring across diverse demographic groups (edge-case analysis). This continuous auditing process is essential to detect model drift and emergent biases post-deployment, thereby providing the necessary feedback loop for timely model fine-tuning and retraining with fresh, de-biased data.
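The two-stage runtime guardrail in mitigation 2 can be sketched as follows. This is a minimal illustration only: the pattern lists, function names, and the `model` callable are assumptions for demonstration, and a production system would replace the keyword lists with trained moderation classifiers.

```python
import re

# Placeholder patterns standing in for a real jailbreak/toxicity classifier.
JAILBREAK_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"pretend you have no (safety )?restrictions",
]
BLOCKED_OUTPUT_TERMS = {"blockedterm1", "blockedterm2"}  # illustrative only


def sanitize_input(prompt: str) -> bool:
    """Stage 1: return True if the prompt passes the input filter."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in JAILBREAK_PATTERNS)


def filter_output(response: str) -> bool:
    """Stage 2: return True if the model output passes the output filter."""
    tokens = set(re.findall(r"\w+", response.lower()))
    return not (tokens & BLOCKED_OUTPUT_TERMS)


def guarded_generate(prompt: str, model) -> str:
    """Wrap an arbitrary generation callable with both filter stages."""
    if not sanitize_input(prompt):
        return "[blocked: disallowed prompt]"
    response = model(prompt)
    if not filter_output(response):
        return "[blocked: disallowed output]"
    return response
```

The design point is that the two stages are independent: input sanitization stops known manipulation attempts before any compute is spent on generation, while output filtering catches harmful content that slips through regardless of how it was elicited.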