6. Socioeconomic and Environmental > 3 - Other

Building an AI able to adapt to humans

This category accounts for almost 9% of the articles and deals with ethical concerns arising from AI's capacity to interact with humans in the workplace.

Source: MIT AI Risk Repository (mit585)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit585

Domain lineage

6. Socioeconomic and Environmental (262 mapped risks) > 6.2 Increased inequality and decline in employment quality

Mitigation strategy

1. Implement mandatory, periodic AI ethics audits of algorithms and training data to proactively identify and correct systemic bias, especially in high-stakes human resource applications (e.g., hiring and performance evaluation), thereby mitigating the risk of increased inequality and ensuring fair outcomes across all protected characteristics.

2. Establish and enforce clear policies for transparency and explainability, requiring organizations to make AI system logic comprehensible, communicate how algorithmic decisions impact employees, and provide defined channels for contesting or appealing AI-generated outcomes.

3. Mandate continuous human oversight ("Human-in-the-Loop") for all critical AI-assisted decisions, coupled with a robust governance framework that clearly assigns accountability to specific human roles (e.g., an AI Ethics Officer or Review Board) to prevent automation bias and uphold human agency in the workplace.