
Sociocultural and Political Harms

These harms interfere with the peaceful organisation of social life, including in the cultural and political spheres. AI assistants may cause or contribute to friction in human relationships, either directly, by convincing a user to end valuable relationships, or indirectly, by eroding interpersonal trust as people grow more dependent on assistants. At the societal level, the spread of misinformation by AI assistants could lead to the erasure of collective cultural knowledge. In the political domain, more advanced AI assistants could manipulate voters by prompting them to adopt certain political beliefs through targeted propaganda, including deepfakes. These effects might in turn have a wider impact on democratic norms and processes. Furthermore, if AI assistants are available only to some people and not others, persuasive capacity could become concentrated, exerting undue influence over political discourse and diminishing the diversity of political thought. Finally, by tailoring content to user preferences and biases, AI assistants may inadvertently contribute to the creation of echo chambers and filter bubbles, and in turn to political polarisation and extremism. In experimental settings, LLMs have been shown to sway individuals on policy matters such as assault weapon restrictions, green energy, and paid parental leave schemes; indeed, their ability to persuade matches that of humans in many respects.

Source: MIT AI Risk Repository (risk ID mit395)

ENTITY: 2 - AI
INTENT: 3 - Other
TIMING: 2 - Post-deployment
Risk ID: mit395

Domain lineage: 5. Human-Computer Interaction > 5.2 Loss of human agency and autonomy (92 mapped risks)

Mitigation strategy

1. Mandate and enforce comprehensive **structural and legislative accountability frameworks** for AI deployment. This includes establishing clear legal and financial repercussions for organizations whose systems demonstrably cause socio-political harms, such as the erosion of democratic norms or mass manipulation, thereby shifting the burden of safety from individual users to system developers and deployers.

2. Implement **advanced algorithmic transparency and bias mitigation** at the model level to counteract the spread of misinformation and propaganda. This requires deploying continuously refined, explainable AI (XAI) systems for the real-time detection of deepfakes and politically biased outputs, alongside representation-level debiasing techniques such as steering vectors (see the sketch after this list) to promote ideological neutrality.

3. Adopt **human-centric interaction design principles to preserve user autonomy and agency**. AI systems must be designed for human augmentation rather than full delegation, incorporating mechanisms that actively expose users to diverse, challenging viewpoints to counteract filter bubbles and echo chambers, and using contextual-awareness features to prevent over-reliance and the atrophy of critical decision-making skills.
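To make the steering-vector idea in item 2 concrete, below is a minimal sketch of how such a technique typically works: derive a direction in activation space from contrastive examples, then subtract a scaled copy of it at inference time. Everything here is illustrative and assumed rather than drawn from the repository entry: the toy linear layer stands in for a transformer block of a real LLM, and random tensors stand in for embeddings of "slanted" versus "neutral" text.

```python
# A minimal sketch of representation-level steering, assuming a PyTorch model
# whose hidden states can be intercepted with a forward hook. The layer choice,
# contrastive data, and scaling factor are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 16

# Stand-in for one transformer block; a real setup would hook a layer of an LLM.
layer = nn.Linear(d_model, d_model)

def hidden(x: torch.Tensor) -> torch.Tensor:
    return layer(x)

# 1. Derive the steering vector as the difference of mean activations on
#    contrastive inputs (random tensors standing in for embeddings of
#    ideologically slanted vs. neutral text).
slanted = torch.randn(32, d_model)
neutral = torch.randn(32, d_model)
steer = hidden(slanted).mean(dim=0) - hidden(neutral).mean(dim=0)
steer = steer / steer.norm()  # unit-normalise the direction

# 2. At inference time, subtract a scaled copy of the vector from the layer's
#    output to push activations away from the unwanted direction.
alpha = 4.0
def steering_hook(module, inputs, output):
    return output - alpha * steer

x = torch.randn(1, d_model)
raw = hidden(x)                                      # unsteered activations
handle = layer.register_forward_hook(steering_hook)
steered = hidden(x)                                  # steered activations
handle.remove()

# The projection onto the unwanted direction shrinks by alpha.
print("projection before:", (raw @ steer).item())
print("projection after: ", (steered @ steer).item())
```

In practice the vector would be computed per layer from curated contrastive prompt pairs rather than random data, and the scaling factor would be tuned so that the unwanted direction is suppressed without degrading the model's fluency on unrelated inputs.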