6. Socioeconomic and Environmental

Job Automation Instead of Augmentation

The impact of AI on labor has both positive and negative aspects. A White House report states that AI “has the potential to increase productivity, create new jobs, and raise living standards,” but it can also disrupt certain industries, causing significant changes, including job loss. Beyond the risk of losing their jobs outright, workers could find that generative AI tools automate parts of their work, or that the requirements of their roles have fundamentally changed. The impact of generative AI will depend on whether the technology is intended for automation (where automated systems replace human work) or augmentation (where AI is used to aid human workers). Over the last two decades, rapid advances in automation have resulted in a “decline in labor share, stagnant wages[,] and the disappearance of good jobs in many advanced economies.”

Source: MIT AI Risk Repository (mit528)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit528

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.2 > Increased inequality and decline in employment quality

Mitigation strategy

1. Mandate fiscal and regulatory parity between physical and human capital investment by extending immediate expensing (e.g., full bonus depreciation) to all legitimate job-related worker training costs, thereby removing the current tax code bias that financially incentivizes automation over labor augmentation.

2. Implement continuous, data-driven workforce transition programs focused on comprehensive upskilling and reskilling, specifically targeting high-exposure/low-adaptive-capacity workers to cultivate expertise in roles requiring human-AI collaboration, complex problem-solving, and non-automatable cognitive tasks.

3. Establish robust AI governance frameworks that require organizational transparency, including mandatory advance disclosure to employees regarding the purpose, data collection, and monitoring methods of any worker-impacting AI system, coupled with non-retaliatory channels for worker input on system design and deployment.