Inequality, Marginalization, and Violence
Generative AI systems can exacerbate inequality, as discussed in the sections on Bias, Stereotypes, and Representational Harms (4.1.1), Cultural Values and Sensitive Content (4.1.2), and Disparate Performance. When systems are deployed or updated, their impacts on people and groups can, directly or indirectly, harm and exploit vulnerable and marginalized groups.
ENTITY
3 - Other
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit175
Domain lineage
1. Discrimination & Toxicity
1.1 > Unfair discrimination and misrepresentation
Mitigation strategy
1. Implement continuous, rigorous monitoring and auditing of the deployed Generative AI system's outputs and performance metrics across diverse demographic and vulnerable groups, to proactively detect and correct emerging biases, disparate performance, or harmful drift.
2. Establish mandated human-in-the-loop oversight for all high-stakes AI-aided decisions. This requires defining clear human review and override protocols so that human judgment acts as the final safeguard against the AI's potential to perpetuate or amplify discrimination and harm against marginalized populations.
3. Develop and maintain robust transparency and accountability mechanisms, including Explainable AI (XAI) to clarify decision processes, and accessible feedback channels through which vulnerable and affected communities can report perceived bias, marginalization, or unfair outcomes for timely investigation and remediation.
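The per-group monitoring described in the first mitigation step can be sketched in a few lines. The example below is a minimal, hypothetical illustration: the record format, group labels, and the `max_gap` threshold are assumptions chosen for the sketch, not part of the mitigation text, and a real audit would track richer metrics (calibration, false-positive rates, drift over time) per group.

```python
from collections import defaultdict


def group_accuracy(records):
    """Compute accuracy per demographic group.

    `records` is a list of (group, prediction, label) tuples.
    The tuple layout is an illustrative assumption for this sketch.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, label in records:
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}


def disparity_alerts(records, max_gap=0.1):
    """Flag groups whose accuracy trails the best-performing group
    by more than `max_gap` -- a trigger for human review, not an
    automated fix. The 0.1 default is arbitrary for illustration.
    """
    acc = group_accuracy(records)
    best = max(acc.values())
    return sorted(g for g, a in acc.items() if best - a > max_gap)
```

In practice such a check would run on a schedule against fresh production samples, with flagged disparities routed to the human review process described in step 2.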