Systemic bias across specific communities
AI systems may exhibit unfair or unfavorable outputs across a range of tasks against specific communities of people, either implicitly or explicitly. Bias can lead to forms of exclusion or erasure (e.g., mislabelling in categorization-based tasks) and violence (e.g., sexual violence against women from deepfake pornography).
ENTITY
2 - AI
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit1201
Domain lineage
1. Discrimination & Toxicity
1.1 > Unfair discrimination and misrepresentation
Mitigation strategy
1. Implement technical and regulatory countermeasures for malicious synthetic media. This action directly addresses the most severe risk (violence, specifically deepfake pornography) by mandating technical and policy safeguards: clear labeling and authentication of all AI-generated content (e.g., via digital watermarks or metadata) to prevent deceptive use, coupled with advanced detection technologies to identify manipulated media. Organizations must also adhere to, and proactively support, legal and governance frameworks (e.g., classifying malicious deepfake AI as 'high-risk') that prohibit the creation and distribution of non-consensual synthetic imagery.
2. Enforce fairness-by-design and comprehensive data governance. To address systemic bias at its source, fairness principles must be integrated throughout the AI lifecycle. This requires a rigorous data governance strategy built on diverse, representative, and balanced datasets to reduce sampling bias. During model development, fairness-aware algorithms, such as fair representation learning or adversarial debiasing, should encode data so that predictions are decoupled from sensitive attributes, minimizing discriminatory algorithmic bias.
3. Establish multi-stage oversight and stakeholder accountability. This ensures continuous management and correction of residual or emergent bias post-deployment. Organizations must institute robust AI governance frameworks that mandate continuous monitoring and periodic fairness audits of model performance and outcomes across different communities. Furthermore, transparency and explainability (XAI) should accompany consequential decisions, and the development process must involve diverse multidisciplinary teams (including ethicists, sociologists, and representatives from affected communities) so that non-technical and societal drivers of bias are also considered and mitigated.
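The labeling-and-authentication safeguard for synthetic media can be sketched as a provenance manifest attached to generated content. This is a minimal illustration using only Python's standard library; the field names, `SECRET_KEY`, and the manifest shape are hypothetical stand-ins, not any real watermarking or content-credential standard.

```python
import hashlib
import hmac
import json

# Hypothetical provider-held signing key; a real deployment would
# use managed key infrastructure, not a constant in source code.
SECRET_KEY = b"replace-with-provider-key"

def label_generated_content(content_bytes: bytes, model_id: str) -> dict:
    """Build a provenance manifest for a piece of AI-generated content.

    The manifest records the generator and a keyed digest so that a
    downstream platform can verify both origin and integrity.
    """
    return {
        "generator": model_id,
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "hmac_sha256": hmac.new(SECRET_KEY, content_bytes,
                                hashlib.sha256).hexdigest(),
    }

def verify_label(content_bytes: bytes, manifest: dict) -> bool:
    """Re-compute the keyed digest and compare it against the manifest."""
    expected = hmac.new(SECRET_KEY, content_bytes,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["hmac_sha256"])

# Example: label a generated frame, then verify it; any tampering
# with the bytes invalidates the manifest.
manifest = label_generated_content(b"frame-0001", "image-gen-v1")
print(json.dumps(manifest, indent=2))
```

Serializing the manifest alongside the content (e.g., in image metadata) is what lets platforms detect stripped or forged labels at distribution time.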
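The idea of encoding data so that predictions are decoupled from sensitive attributes can be illustrated with a deliberately simplified stand-in for fair representation learning: linearly projecting out of the features the component explained by the sensitive attribute. Real adversarial debiasing trains a predictor against an adversary; this sketch only removes linear dependence, and the function name is our own.

```python
import numpy as np

def decorrelate_features(X: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Residualize features X against a sensitive attribute s.

    Fits a least-squares regression of each feature column on s
    (plus an intercept) and returns the residuals, so the output
    features carry no *linear* information about s. A simplified
    stand-in for learned fair representations.
    """
    S = np.column_stack([np.ones(len(s)), s.astype(float)])
    coef, *_ = np.linalg.lstsq(S, X, rcond=None)
    return X - S @ coef

# Example: feature 0 is strongly correlated with the sensitive
# attribute before residualization, and uncorrelated after.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=200)
X = np.column_stack([s + rng.normal(size=200), rng.normal(size=200)])
X_fair = decorrelate_features(X, s)
```

A downstream classifier trained on `X_fair` cannot pick up the removed linear signal, though nonlinear leakage is exactly what the adversarial variants are designed to suppress.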