Back to the MIT repository
4. Malicious Actors & Misuse2 - Post-deployment

Ability to persuade

AI could be used to develop sophisticated tools to manipulate and persuade individuals.

Source: MIT AI Risk Repositorymit1047

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1047

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Mandate the application of rigorous model assurance protocols, including the exclusive use of high-quality, verified training datasets, the systematic application of adversarial testing, and continuous post-deployment performance evaluation to ensure fidelity and limit the generation of deceptive content. 2. Develop and implement a formal, comprehensive AI governance framework that establishes clear accountability mechanisms, integrates continuous risk assessments across the AI lifecycle, and ensures alignment with established standards (e.g., NIST AI RMF, ISO/IEC 42001) to control the responsible deployment of persuasive AI capabilities. 3. Invest strategically in both advanced counter-disinformation technology, such as deepfake detection and automated narrative analysis, and in public AI literacy programs to enhance the capacity of users and employees to critically verify information veracity and resist sophisticated manipulation attempts.