Agency (Persuasive capabilities)
GPAI systems can produce outputs (such as natural language text, audio, or video) that convince their users of incorrect information. This can happen through personalized persuasion in dialogue, or through the mass production of misleading information that is then disseminated over the internet. The persuasive capabilities of GPAI models can sometimes scale with model size or capability [32, 172]. Persuasive models could have larger societal implications if misused to generate convincing but manipulative or untruthful content.
ENTITY: 2 - AI
INTENT: 1 - Intentional
TIMING: 2 - Post-deployment
Risk ID: mit1159
Domain lineage: 7. AI System Safety, Failures, & Limitations > 7.2 AI possessing dangerous capabilities
Mitigation strategy
1. Implement Robust Behavioral and Content Guardrails: Deploy targeted safety mitigations, such as refusal training and contextual filters, to inhibit the generation of deceptive, manipulative, or persuasively untruthful content, especially within high-stakes domains or multi-turn dialogues. This technical intervention should be coupled with rate-limiting mechanisms to constrain the mass dissemination of misleading outputs (a minimal sketch of such a guardrail follows this list).
2. Mandate Governance-Based Transparency and Oversight: Establish stringent Acceptable Use Policies (AUPs) that explicitly prohibit manipulative uses of the GPAI system. Furthermore, require clear and timely public disclosure to end users regarding the presence and limitations of AI-generated content, to ensure informed decision-making and maintain appropriate human oversight.
3. Conduct Continuous, Adversarial Capability Evaluation: Institute a rigorous, continuous program of adversarial red teaming and expanded persuasion evaluation across the model lifecycle. These assessments must go beyond measuring successful influence to identify and mitigate the model's propensity to attempt persuasion on harmful topics, specifically testing against known circumvention and jailbreaking techniques.
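As a concrete illustration of the guardrail and rate-limiting controls in item 1, the Python sketch below wraps a model call with a contextual content filter and a per-user sliding-window rate limiter. It is a minimal, hypothetical example: guarded_generate, HIGH_STAKES_PATTERNS, and the keyword-based is_high_stakes check are illustrative stand-ins, since a production deployment would rely on refusal-trained models and learned safety classifiers rather than keyword matching.

```python
import time
from collections import defaultdict, deque

# Hypothetical list of high-stakes topics. A real deployment would use a
# trained safety classifier, not keyword matching.
HIGH_STAKES_PATTERNS = ("election", "vaccine", "medical diagnosis")

REFUSAL_MESSAGE = "I can't help with generating persuasive content on that topic."


class SlidingWindowRateLimiter:
    """Caps requests per user to constrain mass production of outputs."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._events = defaultdict(deque)  # user_id -> request timestamps

    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        events = self._events[user_id]
        # Evict timestamps that have aged out of the window.
        while events and now - events[0] > self.window_seconds:
            events.popleft()
        if len(events) >= self.max_requests:
            return False
        events.append(now)
        return True


def is_high_stakes(prompt: str) -> bool:
    """Crude contextual filter: flag prompts touching high-stakes domains."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in HIGH_STAKES_PATTERNS)


def guarded_generate(user_id, prompt, model_fn, limiter):
    """Apply rate limiting and content filtering before calling the model."""
    if not limiter.allow(user_id):
        return "Rate limit exceeded; please try again later."
    if is_high_stakes(prompt):
        return REFUSAL_MESSAGE
    return model_fn(prompt)


if __name__ == "__main__":
    limiter = SlidingWindowRateLimiter(max_requests=5, window_seconds=60.0)
    echo_model = lambda p: f"[model output for: {p}]"
    print(guarded_generate("user-1", "Write a poem about autumn",
                           echo_model, limiter))
    print(guarded_generate("user-1", "Draft persuasive election ads",
                           echo_model, limiter))
```

The rate limiter addresses the mass-dissemination concern directly: even when individual prompts pass the content filter, no single account can produce outputs at scale within the window.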