6. Socioeconomic and Environmental

Labor & material / Macro-socioeconomic harms

Algorithmic systems can increase “power imbalances in socio-economic relations” at the societal level [4, 137, p. 182], including by exacerbating digital divides and entrenching systemic inequalities [114, 230]. The development of algorithmic systems may tap into and foster forms of labor exploitation [77, 148], such as unethical data collection and worsening worker conditions [26], or lead to technological unemployment [52], including the deskilling or devaluing of human labor [170]. ... When algorithmic financial systems fail at scale, the failures can cause “flash crashes” and other adverse incidents with widespread impacts.

Source: MIT AI Risk Repository (mit156)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit156

Domain lineage

6. Socioeconomic and Environmental (262 mapped risks)

6.1 > Power centralization and unfair distribution of benefits

Mitigation strategy

1. **Prioritize Proactive Technological Unemployment Mitigation and Workforce Augmentation.** Invest in comprehensive, collaborative, and long-term workforce reskilling and upskilling programs (e.g., worker retraining accounts, industry-government partnerships) to transition workers into technology-augmented roles rather than replacing human labor. This strategy directly addresses deskilling and job displacement by fostering continuous learning and a resilient labor market during periods of high automation.
2. **Establish and Enforce Algorithmic Management Governance and Fairness Constraints.** Implement strict governance frameworks for all algorithmic systems used in employment (e.g., scheduling, performance evaluation) to prevent labor exploitation and the worsening of worker conditions. These frameworks must mandate transparency in decision-making criteria, integrate explicit fairness constraints to ensure equitable distribution of work, and require continuous auditing and human oversight to prevent the amplification of socioeconomic harms in the workplace.
3. **Restructure Algorithmic Development through Participatory Design and Bias Mitigation.** Address algorithmic power asymmetry and systemic inequalities by moving beyond simple bias detection and adopting participatory design methodologies. This requires engaging diverse stakeholders, including members of underrepresented populations, in the problem framing, development, and governance of AI systems, so that unequal control over algorithms does not reinforce existing social or economic divides. Rigorously employ fairness-aware algorithms and diverse training data throughout the AI lifecycle.

ADDITIONAL EVIDENCE

“Harms associated with the labour and material supply chains of AI technologies, beta testing, and commercial exploitation.”