4. Malicious Actors & Misuse

Systemic large-scale manipulation

AI systems embedded with systemic biases can manipulate large population segments, particularly when those biases align with the beliefs or behaviors of the targeted group. Weaponized at scale, this manipulation can exacerbate social divisions or cause large-scale disruptions, such as city-wide blackouts (e.g., by manipulating power consumption to shift load into the peak-demand period [159]).

Source: MIT AI Risk Repository (mit1183)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1183

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

- Mandate Independent Pre-Deployment Safety Assessments and Audits, with specific adversarial testing focused on the model's potential for harmful manipulation and the amplification of systemic biases, as a prerequisite for deployment.
- Implement Continuous Post-Deployment Monitoring, including robust input/output filtering mechanisms, to detect and neutralize at-scale manipulative content generation, track bias drift, and report serious incidents related to social division or critical infrastructure disruption.
- Establish Formal Risk-Focused Governance Structures, such as board risk committees and ethics boards, and enforce the use of diverse, representative datasets in model development to proactively mitigate the embedding of systemic biases that enable large-scale manipulation.