Ethical Risks (Risks of exacerbating social discrimination and prejudice, and widening the intelligence divide)
AI can be used to collect and analyze data on human behavior, social status, economic status, and individual personality, labeling and categorizing groups of people and subjecting them to discriminatory treatment, thereby causing systematic and structural social discrimination and prejudice. At the same time, the intelligence divide between regions would widen.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit704
Domain lineage
6. Socioeconomic and Environmental
6.2 > Increased inequality and decline in employment quality
Mitigation strategy
1. Establish a Rigorous AI Governance and Fairness-by-Design Framework
Mandate the integration of non-discrimination and equity principles into every phase of the AI lifecycle, instituting formal governance structures that define accountability and ensure continuous human oversight (human-in-the-loop) for high-consequence decisions, so as to systematically mitigate the amplification of structural biases.
2. Implement Comprehensive Algorithmic and Data Auditing Protocols
Require continuous fairness testing using technical metrics (e.g., demographic parity, equal opportunity) across diverse demographic subgroups to proactively detect and remediate bias. Couple this with rigorous auditing of training datasets to ensure diversity and representation and to eliminate features that serve as discriminatory proxies for protected characteristics.
3. Strategically Invest in Infrastructure and Digital Literacy to Narrow the AI Divide
Implement targeted policy interventions, including subsidized programs for broadband expansion and reliable electricity in underserved regions, alongside the systemic integration of digital literacy and AI skills training into educational and vocational frameworks, to ensure equitable access to and benefit from AI technologies, thereby preventing the widening of socioeconomic and regional inequalities.
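The fairness testing described in mitigation 2 can be sketched in a few lines of code. This is a minimal, illustrative audit: the two metrics compare a model's positive-prediction rate (demographic parity) and its true-positive rate (equal opportunity) across subgroups. The function names, the two-group setup, and the toy data are assumptions for illustration, not part of any specific auditing standard.

```python
# Minimal sketch of subgroup fairness testing (illustrative only).
# Demographic parity compares selection rates across groups;
# equal opportunity compares true-positive rates (recall) across groups.

def demographic_parity_diff(y_pred, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

def equal_opportunity_diff(y_true, y_pred, groups):
    """Absolute gap in true-positive rates between two groups."""
    tpr = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
        positives = [p for t, p in pairs if t == 1]
        tpr[g] = sum(positives) / len(positives)
    vals = list(tpr.values())
    return abs(vals[0] - vals[1])

# Hypothetical audit data: predictions for subgroup "A" vs subgroup "B"
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

dp_gap = demographic_parity_diff(y_pred, groups)          # → 0.5
eo_gap = equal_opportunity_diff(y_true, y_pred, groups)   # → 0.5
```

In a real audit, gaps like these would be computed per protected attribute and compared against a pre-agreed tolerance; a nonzero gap (here, group A is selected at 0.75 versus 0.25 for group B) would trigger the remediation step the mitigation strategy calls for.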