
Societal System Harms

Social system or societal harms reflect the adverse macro-level effects of new and reconfigurable algorithmic systems, such as systematizing bias and inequality [84] and accelerating the scale of harm [137].

Source: MIT AI Risk Repository (mit152)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit152

Domain lineage: 6. Socioeconomic and Environmental (262 mapped risks) > 6.0 Socioeconomic & Environmental

Mitigation strategy

1. Establish a comprehensive, multidisciplinary AI governance framework that mandates the assessment of systemic risks and potential macro-level societal impacts. This includes implementing mandatory Algorithmic Impact Assessments (AIAs) and assigning clear accountability for socio-technical harms, to counter the systematization of bias and inequality.

2. Enforce advanced bias mitigation throughout the entire development pipeline (pre-processing, in-processing, and post-processing). Specifically, employ fair representation learning and fairness-aware algorithms to interrupt the cycle of perpetuating historical data biases, thereby preventing the entrenchment and acceleration of systemic harm (see the first sketch after this list).

3. Implement continuous, real-time monitoring of deployed algorithmic systems for indicators of "bias drift" and unintended societal consequences (e.g., resource distribution inequities or cultural harms). Auditing must include human-in-the-loop oversight and feedback mechanisms that incorporate perspectives from marginalized or disproportionately affected communities, to ensure the system's long-term alignment with social equity goals (see the second sketch after this list).
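To make the post-processing step in item 2 concrete, here is a minimal Python sketch of one common fairness-aware technique: choosing a separate decision threshold per demographic group so that selection rates match a shared target (a demographic-parity criterion). The function names, the two-group setup, and the target_rate parameter are illustrative assumptions, not drawn from the repository entry.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Pick, for each group, the score cutoff whose selection rate is
    closest to the shared target rate (a demographic-parity criterion)."""
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        # Selecting everyone at or above s[idx] approves (n - idx) of n
        # items, so idx is chosen to approximate the target rate.
        idx = int(round((1.0 - target_rate) * (len(s) - 1)))
        thresholds[g] = s[idx]
    return thresholds

def fair_decisions(scores, groups, thresholds):
    """Apply each group's own threshold to produce binary decisions."""
    return np.array([scores[i] >= thresholds[g] for i, g in enumerate(groups)])

# Illustrative usage: scores historically skewed in favor of group "a".
rng = np.random.default_rng(0)
groups = rng.choice(["a", "b"], size=2000)
scores = rng.random(2000) + np.where(groups == "a", 0.1, 0.0)
thresholds = group_thresholds(scores, groups, target_rate=0.3)
decisions = fair_decisions(scores, groups, thresholds)
for g in ("a", "b"):
    print(g, decisions[groups == g].mean())  # both rates land near 0.3
```

Because each group gets its own cutoff, the skew in the raw scores no longer propagates into the selection rates; this is only one of several post-processing criteria (equalized odds, for instance, would calibrate against ground-truth labels instead).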
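For the continuous monitoring in item 3, a minimal sketch of a bias-drift check: it tracks the gap in positive-decision rates between two groups over a sliding window and raises a flag for human review when the gap breaches a tolerance. The two-group setup, window size, and tolerance value are illustrative assumptions; production monitoring would track several fairness indicators, not just this one.

```python
from collections import deque

class BiasDriftMonitor:
    """Sliding-window check on the positive-decision-rate gap between
    two demographic groups (one simple indicator of bias drift)."""

    def __init__(self, window=1000, tolerance=0.10):
        self.decisions = {"a": deque(maxlen=window), "b": deque(maxlen=window)}
        self.tolerance = tolerance

    def record(self, group, approved):
        """Log one decision for the given group; old entries age out."""
        self.decisions[group].append(1 if approved else 0)

    def parity_gap(self):
        """Difference between the groups' current positive-decision rates."""
        rates = [sum(d) / len(d) for d in self.decisions.values() if d]
        return max(rates) - min(rates) if len(rates) == 2 else 0.0

    def drifted(self):
        # True when the selection-rate gap breaches tolerance, signalling
        # that decisions should escalate to human-in-the-loop review.
        return self.parity_gap() > self.tolerance

# Illustrative usage inside a deployed decision loop:
monitor = BiasDriftMonitor(window=500, tolerance=0.10)
monitor.record("a", approved=True)
monitor.record("b", approved=False)
if monitor.drifted():
    print("Parity gap", monitor.parity_gap(), "- escalate to human review")
```

The sliding window is the design point: it makes the check sensitive to recent shifts in the deployed system's behavior rather than averaging drift away over the full deployment history.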