
Cultural harms

Cultural harm has been described as the development or use of algorithmic systems that affect cultural stability and safety, such as “loss of communication means, loss of cultural property, and harm to social values”.

Source: MIT AI Risk Repository (mit154)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit154

Domain lineage

5. Human-Computer Interaction

92 mapped risks

5.2 > Loss of human agency and autonomy

Mitigation strategy

1. Mandatory Post-Deployment Sociotechnical Audit and Remediation: Implement continuous, rigorous post-deployment monitoring and auditing using both technical fairness metrics and human-in-the-loop assessments to detect real-world representational harms, such as stereotyping or demeaning of cultural groups, and mandate immediate model recalibration or feature removal until the harm is demonstrably mitigated.

2. Integrate Affected Community Expertise and Governance: Establish formal, multidisciplinary governance structures, such as an ethical review board, that require the active participation and approval of representatives from the potentially harmed cultural communities in the design, risk assessment, and validation stages of the system's life cycle.

3. Eliminate Systemic Bias Proxies in Data: Conduct a deep sociotechnical analysis of the training data, features, and target outcomes to identify and eliminate proxies (e.g., historical search patterns, socio-economic factors) that implicitly encode and amplify existing systemic and cultural inequalities, ensuring the data is broadly representative and that the model's objective function aligns with equitable outcomes.
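The first mitigation item calls for continuous post-deployment monitoring with technical fairness metrics. As one illustration only (the repository does not prescribe a specific metric), here is a minimal sketch of a demographic-parity check over a post-deployment decision log; the group labels, the log format, and the 0.2 alert threshold are all hypothetical:

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Largest gap in positive-outcome rates across groups.

    records: iterable of (group_label, outcome) pairs, outcome in {0, 1}.
    Returns (per-group rates, max gap between any two groups).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical audit log of (group, model decision) pairs
log = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates, gap = demographic_parity_gap(log)
if gap > 0.2:  # hypothetical audit threshold triggering recalibration
    print(f"ALERT: recalibration required, gap = {gap:.2f}")
```

A real audit would pair a metric like this with the human-in-the-loop assessments the strategy describes, since representational harms (e.g., stereotyping in search results) are often invisible to purely statistical checks.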

ADDITIONAL EVIDENCE

[An image search for 'thug' showing predominantly Black men] . . . It damages all the Black community because if you're damaging Black men, then you're hurting Black families