4. Malicious Actors & Misuse

Scams

Bad actors can use generative AI tools to produce adaptable content designed to support a campaign, political agenda, or hateful position, and to spread that content quickly and inexpensively across many platforms. This rapid spread of false or misleading content (AI-facilitated disinformation) can also create a cyclical effect for generative AI: when a high volume of disinformation is pumped into the digital ecosystem and more generative systems are trained on that material, for example via reinforcement learning methods, false or misleading inputs can produce increasingly incorrect outputs.

Source: MIT AI Risk Repository (mit511)
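The cyclical effect described above can be illustrated with a toy model. The sketch below is purely hypothetical: the function name and every parameter (initial false-content share, fraction of the next training pool that is synthetic, model error rate) are assumptions chosen for illustration, not figures from the repository.

```python
def simulate_disinformation_feedback(
    false_share: float = 0.05,        # assumed initial share of false/misleading content in the data pool
    synthetic_fraction: float = 0.6,  # assumed fraction of the next training pool that is AI-generated
    error_rate: float = 0.5,          # assumed extra falsity a model adds when trained on false inputs
    generations: int = 5,
) -> list[float]:
    """Toy model of the feedback loop: each generation trains on a pool that mixes
    original data with the previous generation's outputs, so false content compounds."""
    shares = [false_share]
    for _ in range(generations):
        # Synthetic content inherits the previous pool's false share, amplified by the model's error rate.
        synthetic_false = min(1.0, shares[-1] * (1 + error_rate))
        next_share = (1 - synthetic_fraction) * false_share + synthetic_fraction * synthetic_false
        shares.append(min(1.0, next_share))
    return shares

if __name__ == "__main__":
    for gen, share in enumerate(simulate_disinformation_feedback()):
        print(f"generation {gen}: false-content share ~ {share:.3f}")
```

Under these assumed parameters the false-content share rises with each model generation, which is the "increasingly incorrect outputs" dynamic the description points to; a larger synthetic fraction or error rate pushes the share toward saturation.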

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit511

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.3 Fraud, scams, and targeted manipulation

Mitigation strategy

1. Implement robust, standardized content provenance mechanisms, such as mandatory watermarking and metadata, to clearly identify AI-generated material, alongside requiring developer transparency about model training data sources to mitigate cyclical disinformation corruption (a minimal provenance sketch follows this list).
2. Develop advanced, multilingual AI-driven detection and attribution models that incorporate Explainable AI (XAI) to explain flagging decisions, facilitating human-in-the-loop verification and rapid collaborative fact-checking workflows.
3. Establish a multi-stakeholder governance framework grounded in the principles of fairness, accountability, and security to mandate regular bias audits of AI systems and to sustain public education campaigns promoting digital literacy and cognitive resilience against targeted manipulation.
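As a minimal sketch of the provenance mechanism in item 1, the example below attaches a tamper-evident provenance record to generated text using a content hash and an HMAC signature. The signing key, function names, and model identifier are hypothetical; production systems would more likely follow an open standard such as C2PA with public-key signatures rather than this shared-secret scheme.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical provider-held signing key (assumption for illustration only).
PROVIDER_KEY = b"example-provider-signing-key"

def build_provenance_record(content: str, model_id: str) -> dict:
    """Attach a tamper-evident provenance record to a piece of AI-generated text."""
    manifest = {
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    manifest["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "provenance": manifest}

def verify_provenance(record: dict) -> bool:
    """Recompute the content hash and signature to detect tampering or swapped content."""
    manifest = dict(record["provenance"])
    signature = manifest.pop("signature")
    if manifest["content_sha256"] != hashlib.sha256(record["content"].encode("utf-8")).hexdigest():
        return False
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

if __name__ == "__main__":
    record = build_provenance_record("Example AI-generated paragraph.", model_id="example-model-v1")
    print(verify_provenance(record))  # True while content and metadata are intact
```

Binding the signature to both the content hash and the generation metadata means a downstream platform can check whether a provenance label still matches the material it accompanies before amplifying it, which is the property the watermarking and metadata measures in item 1 aim to provide.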