5. Human-Computer Interaction

Human dignity/respect

Disparities in caste or status based on intelligence may leave parts of society, namely humans surpassed in intelligence by AI, in an undignified position

Source: MIT AI Risk Repository, mit110

ENTITY

3 - Other

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit110

Domain lineage

5. Human-Computer Interaction

5.2 > Loss of human agency and autonomy (92 mapped risks)

Mitigation strategy

1. Establish and enforce a globally aligned, human-centered AI governance framework, anchored in international human rights and legal principles, that explicitly affirms the intrinsic and non-derogable moral worth and dignity of all individuals, irrespective of AI capabilities or performance metrics.

2. Institute mandatory human oversight and final human review for all AI-assisted decisions that have significant individual or societal impact (e.g., in employment, justice, and social assistance), thereby legally preserving human agency and professional responsibility over algorithmic outputs.

3. Develop and deploy AI systems using methodologies that incorporate fairness constraints and continuous auditing specifically to detect and mitigate social stratification and discriminatory outcomes that could assign or reinforce a negative status to groups based on perceived or actual cognitive inferiority relative to the AI.
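
The continuous-auditing element of item 3 can be sketched in a few lines. This is a minimal illustration only: the choice of metric (demographic parity gap), the threshold value, and all function names here are assumptions for the sake of example, not part of the repository entry or any prescribed audit methodology.

```python
# Illustrative sketch: auditing model decisions for disparity between groups.
# The demographic-parity metric and the 0.1 threshold are assumptions chosen
# for this example, not requirements from the MIT AI Risk Repository.

def demographic_parity_gap(decisions, groups):
    """Largest absolute difference in positive-decision rates across groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

def audit(decisions, groups, threshold=0.1):
    """Flag the system for human review when group disparity exceeds threshold."""
    gap = demographic_parity_gap(decisions, groups)
    return {"gap": gap, "needs_review": gap > threshold}

# Example: group "b" never receives a positive decision while group "a" always
# does, so the audit flags the outcome distribution for human review.
result = audit([1, 1, 1, 0, 0, 0], ["a", "a", "a", "b", "b", "b"])
```

In practice such a check would run on live decision logs at a fixed cadence, with flagged windows routed to the mandatory human review described in item 2.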