4. Malicious Actors & Misuse

Political Strategy

LLMs can take into account rich social context and undertake the social modelling and planning necessary for an actor to gain and exercise political influence.

Source: MIT AI Risk Repository (mit661)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

3 - Other

Risk ID

mit661

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Implement advanced technical interventions, such as Steering Vector Ensembles (SVE) or multi-agent advisory systems, to actively probe and mitigate systematic political or social representational biases within the LLM's internal representations and output generation processes, thereby limiting its utility for targeted political influence operations.

2. Establish a comprehensive AI Risk Management Framework (e.g., aligned with NIST AI RMF guidelines) that specifically incorporates oversight, continuous monitoring, and control mechanisms to detect and respond to malicious misuse scenarios, including the mass production of persuasive political argumentation and influence campaigns.

3. Require stringent interpretability and explainability protocols for LLM outputs in politically sensitive domains, coupled with public-facing AI literacy initiatives, so that users understand the potential for model bias and are less susceptible to LLM-driven persuasive influence.
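As a rough illustration of the steering-vector idea referenced in mitigation 1, the sketch below shows the core mechanism in a deliberately simplified setting: contrastive difference-of-means vectors are extracted from "biased" versus "neutral" activations, averaged into an ensemble, and subtracted from a hidden state to suppress the probed direction. All names, the toy data, and the averaging scheme are illustrative assumptions, not the repository's or any specific SVE implementation's API.

```python
import numpy as np

def steering_vector(pos_acts, neg_acts):
    """Contrastive difference-of-means: the direction that separates
    'biased' (pos) from 'neutral' (neg) activations at one layer."""
    return np.mean(pos_acts, axis=0) - np.mean(neg_acts, axis=0)

def ensemble_steer(hidden, vectors, alpha=-1.0):
    """Average an ensemble of steering vectors, normalize, and add the
    scaled result to a hidden state. A negative alpha pushes the
    activation *away* from the ensemble direction (bias suppression)."""
    v = np.mean(vectors, axis=0)
    v = v / (np.linalg.norm(v) + 1e-8)
    return hidden + alpha * v

# Toy data: activations in an 8-d space with a planted "bias" direction.
rng = np.random.default_rng(0)
d = 8
bias_dir = rng.normal(size=d)
pos1 = rng.normal(size=(16, d)) + bias_dir   # probe 1, "biased" samples
pos2 = rng.normal(size=(16, d)) + bias_dir   # probe 2, "biased" samples
neg1 = rng.normal(size=(16, d))              # probe 1, "neutral" samples
neg2 = rng.normal(size=(16, d))              # probe 2, "neutral" samples
vecs = [steering_vector(pos1, neg1), steering_vector(pos2, neg2)]

h = rng.normal(size=d) + bias_dir            # a "biased" hidden state
h_steered = ensemble_steer(h, vecs, alpha=-2.0)

def proj(x):
    """Scalar projection of x onto the planted bias direction."""
    return float(np.dot(x, bias_dir) / np.linalg.norm(bias_dir))
```

After steering, the hidden state's projection onto the planted bias direction shrinks, which is the intended effect: the representational direction the probe identified carries less weight in subsequent computation.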