4. Malicious Actors & Misuse

Civic and political harms

Political harms emerge when “people are disenfranchised and deprived of appropriate political power and influence” [186, p. 162]. These harms arise in the domain of government and concern how algorithmic systems govern through individualized nudges or micro-directives [187], which may destabilize governance systems, erode human rights, be used as weapons of war [188], and enact surveillance regimes that disproportionately target and harm people of color.

Source: MIT AI Risk Repository (mit155)

ENTITY: 3 - Other

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit155

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Establish mandatory algorithmic accountability and transparency frameworks, including independent oversight mechanisms (e.g., Algorithmic Transparency Commissions) and human rights impact assessments, to scrutinize automated systems deployed in public functions and civic spaces.

2. Require technology platforms to implement advanced, hybrid technical and human interventions for real-time threat assessment and detection of coordinated computational propaganda, bot networks, and AI-generated disinformation, reducing manipulation at scale.

3. Systematically integrate and scale media, digital, and civic literacy curricula across educational and public sectors to cultivate critical thinking skills and strengthen societal resilience against pervasive informational manipulation, for example through pre-bunking.

ADDITIONAL EVIDENCE

Bots (automated programs) are used to spread computational propaganda. While bots can be used for legitimate functions ... [they] can be used to spam, harass, silence opponents, “give the illusion of large-scale consensus”, sway votes, defame critics, and spread disinformation campaigns.