6. Socioeconomic and Environmental

Social AI Risks

Social AI risks refer primarily to the loss of jobs (technological unemployment) caused by increasing automation, reflected in growing resistance among employees to the integration of AI (Thierer et al., 2017; Winfield & Jirotka, 2018). In addition, the spread of AI systems into all spheres of life poses a growing threat to the privacy and security of individuals and of society as a whole (Winfield & Jirotka, 2018; Wirtz et al., 2019).

Source: MIT AI Risk Repository (mit303)

ENTITY: 1 - Human

INTENT: 3 - Other

TIMING: 2 - Post-deployment

Risk ID: mit303

Domain lineage: 6. Socioeconomic and Environmental (262 mapped risks) > 6.2 Increased inequality and decline in employment quality

Mitigation strategy

1. Prioritize and invest in continuous **reskilling and upskilling programs** to transition the workforce toward roles focused on human-machine collaboration and augmentation. This proactive approach should redefine job descriptions to leverage AI tools for enhanced decision-making and higher-value tasks, thereby mitigating technological unemployment and worker resistance to automation.
2. Establish and enforce a comprehensive **AI data governance and privacy framework** throughout the system's lifecycle. This framework must mandate strict protocols, including data minimization, anonymization, encryption, and role-based access controls, while giving individuals transparency about data collection and clear mechanisms for opting out.
3. Implement an **AI security and risk management strategy** following a 'Secure-by-Design' principle. Key actions include securing the complete AI supply chain, rigorously assessing model vulnerabilities via adversarial testing, and continuously monitoring systems for anomalous activity to safeguard both the integrity of the AI environment and the security of individuals.
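Two of the controls named in the privacy framework above, data minimization and pseudonymization, can be sketched concretely. The snippet below is an illustrative example only: the field names, allow-list, and salt are hypothetical and not part of the repository entry, and a production system would keep the salt in a secret manager rather than in code.

```python
import hashlib

# Hypothetical allow-list: keep only the fields required for the stated
# purpose (data minimization).
ALLOWED_FIELDS = {"user_id", "region", "event_type"}

# Illustrative salt only; in practice this would be a managed secret.
SALT = "example-salt"


def minimize(record: dict) -> dict:
    """Drop every field not explicitly required (data minimization)."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a salted SHA-256 digest."""
    out = dict(record)
    if "user_id" in out:
        digest = hashlib.sha256((SALT + str(out["user_id"])).encode())
        out["user_id"] = digest.hexdigest()[:16]
    return out


raw = {
    "user_id": "alice",
    "email": "alice@example.com",  # not needed, so it is dropped
    "region": "EU",
    "event_type": "login",
}
clean = pseudonymize(minimize(raw))
```

Note that salted hashing is pseudonymization, not full anonymization: the mapping remains reversible to anyone holding the salt, so the encryption and access-control measures listed in the same mitigation point still apply to the stored data.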