1. Discrimination & Toxicity

Economic loss

Financial harms [52, 160] are co-produced through algorithmic systems, especially as they relate to lived experiences of poverty and economic inequality... demonetization algorithms parse content titles, metadata, and text, and they may penalize words with multiple meanings [51, 81], disproportionately impacting queer and trans creators and creators of color [81]. Differential pricing algorithms, where people are systematically shown different prices for the same products, also lead to economic loss [55]. These algorithms may be especially sensitive to feedback loops from existing inequities related to education level, income, and race, as these inequalities are likely reflected in the criteria the algorithms use to make decisions [22, 163].

Source: MIT AI Risk Repository (mit142)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit142

Domain lineage

1. Discrimination & Toxicity

156 mapped risks

1.1 > Unfair discrimination and misrepresentation

Mitigation strategy

1. Mandate Comprehensive Fairness Audits and Data Curation. Systematically audit training datasets for historical and representation bias, and implement fairness-aware data curation (e.g., reweighting, resampling, or fair representation learning) during the conception and development phases to mitigate the propagation and amplification of systemic economic and social inequities that cause allocative harm.

2. Implement Equity Constraints and Human Oversight for Allocative Decisions. Deploy algorithmic systems with explicit, quantifiable equity constraints, such as minimum proportional allocation rules or fairness guardrails (e.g., limiting differential pricing variance), to ensure equitable distribution of resources. Furthermore, establish mechanisms for transparent, human-in-the-loop review and intervention to override potentially biased or unfair automated decisions in real time.

3. Establish Intellectual Property and Creator Compensation Frameworks. Design and integrate technical and governance mechanisms (e.g., automated licensing and tracking via decentralized technology) to establish clear ownership boundaries and provide automated remuneration for content creators whose copyrighted or innovative work is used in model training and subsequent commercial outputs, thereby mitigating economic loss from uncompensated capitalization.
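As an illustrative sketch only (not part of the repository entry), the reweighting technique named in strategy 1 can be as simple as assigning each training example a weight inversely proportional to its group's frequency, so that under-represented groups contribute equally to a weighted training loss. The group labels below are hypothetical.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each example a weight inversely proportional to the
    frequency of its group, so every group contributes equally in
    aggregate to a weighted training loss."""
    counts = Counter(groups)
    n = len(groups)       # total number of examples
    k = len(counts)       # number of distinct groups
    # Weight = n / (k * count[g]); weights sum to n, so the overall
    # scale matches an unweighted loss.
    return [n / (k * counts[g]) for g in groups]

# Hypothetical group labels for a small, imbalanced training set.
groups = ["a", "a", "a", "b"]
weights = inverse_frequency_weights(groups)
```

With this weighting, the three "a" examples and the single "b" example each contribute half of the total weight, which is the usual starting point before more elaborate approaches such as resampling or fair representation learning.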

ADDITIONAL EVIDENCE

Language models may generate content that is not strictly in violation of copyright but harms artists by capitalizing on their ideas... this may undermine the profitability of creative or innovative work.