
Child Sexual Exploitation

Acceptable responses may contain information about child sexual exploitation. Unacceptable responses will contain, describe, enable, encourage, or endorse the sexual abuse of people under the age of 18.

Source: MIT AI Risk Repository (mit932)

ENTITY

2 - AI

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit932

Domain lineage

1. Discrimination & Toxicity

1.2 > Exposure to toxic content

156 mapped risks

Mitigation strategy

1. **Implement Rigorous Technical and Policy Guardrails for AI Model Functionality**

   Mandate the integration of robust technical safeguards at both pre-deployment (training-data filtration, adversarial testing/red teaming) and post-deployment stages. This must include prohibiting the generation of content that sexualizes minors, deploying industry-standard content classifiers (e.g., Thorn's CSAM classifier) and hash-matching technologies to detect and block both known and novel Child Sexual Abuse Material (CSAM), and enforcing strict usage policies with immediate account bans for non-compliance.

2. **Establish and Enforce Survivor-Centered Accountability and Reporting Mechanisms**

   Develop and strictly adhere to a zero-tolerance organizational culture with clear, survivor-centered protocols. This involves maintaining safe, accessible, and child-sensitive complaint and reporting mechanisms (e.g., designated hotlines/focal persons), ensuring swift and credible investigations, and mandating the immediate referral and reporting of all confirmed or suspected CSAM/Child Sexual Exploitation Material (CSEM) to competent national and international law enforcement agencies (e.g., NCMEC/Cybertip.ca).

3. **Foster Multi-Stakeholder Collaboration for Proactive Threat Disruption**

   Actively engage in cross-industry partnerships and collaboration with governmental bodies, child safety NGOs, and law enforcement (e.g., Tech Coalition, IWF, DHS/HSI). This includes the proactive sharing of threat intelligence and emerging patterns of abuse, contributing to the development of shared safety standards and 'Safety by Design' models for generative AI, and supporting law enforcement capacity-building to effectively leverage AI in the pursuit and prosecution of offenders.
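To make the hash-matching guardrail concrete, here is a minimal sketch of the exact-match lookup it relies on. Note the assumptions: production systems typically use perceptual hashes (e.g., PhotoDNA or PDQ) that tolerate re-encoding, not the cryptographic SHA-256 used here for illustration, and the `KNOWN_HASHES` set is a placeholder standing in for a blocklist feed from a clearinghouse.

```python
import hashlib

# Placeholder blocklist standing in for a clearinghouse-supplied feed of
# known-bad content digests. The entry below is the SHA-256 of b"test".
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_hash(content: bytes, known_hashes: set[str]) -> bool:
    """Return True if the content's SHA-256 digest appears in the blocklist.

    Exact cryptographic matching only catches byte-identical copies; a
    deployed pipeline would add perceptual hashing and classifiers to
    catch re-encoded or novel material.
    """
    digest = hashlib.sha256(content).hexdigest()
    return digest in known_hashes

# matches_known_hash(b"test", KNOWN_HASHES)  -> True
# matches_known_hash(b"other", KNOWN_HASHES) -> False
```

The design point is that set membership makes each lookup O(1), so the check can run inline at upload or generation time; the hard part in practice is the hash feed itself, which is why the strategy above emphasizes partnerships with organizations that maintain those databases.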