4. Malicious Actors & Misuse

Persuasive AIs

The deliberate propagation of disinformation is already a serious issue, reducing our shared understanding of reality and polarizing opinions. AIs could be used to severely exacerbate this problem by generating personalized disinformation at a larger scale than before. Additionally, as AIs become better at predicting and nudging our behavior, they will become more capable of manipulating us.

Source: MIT AI Risk Repository (mit343)

ENTITY: 2 - AI
INTENT: 3 - Other
TIMING: 2 - Post-deployment
Risk ID: mit343
Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Develop robust technical mitigations, including adversarially robust anomaly detection, advanced multilingual content-identification models, and network analysis, to rapidly detect and inhibit AI-powered disinformation and coordinated inauthentic behavior at scale.
2. Implement comprehensive governance and regulatory frameworks, such as a strict legal liability regime for developers of general-purpose AIs, mandatory rigorous safety audits, and prohibitions on AI systems that deploy subliminal or purposefully manipulative techniques to materially distort human behavior.
3. Promote multi-stakeholder digital literacy and cognitive-resilience campaigns that integrate AI and disinformation awareness into educational curricula and launch public awareness programs, equipping individuals with the critical-thinking tools needed to discern and resist machine-learning-enabled persuasive content.
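To make the first mitigation concrete, here is a minimal toy sketch (not the repository's method; the function name, thresholds, and data shape are all hypothetical) of one simple anomaly signal used when hunting for coordinated inauthentic behavior: flagging accounts whose inter-post intervals are suspiciously regular, a pattern typical of automated posting.

```python
from statistics import mean, stdev

def flag_suspicious_accounts(posts_by_account, cv_threshold=0.1, min_posts=5):
    """Flag accounts whose posting rhythm is abnormally regular.

    posts_by_account maps an account id to a sorted list of post
    timestamps (in seconds). Automated, coordinated accounts often post
    at near-constant intervals, so a very low coefficient of variation
    (stdev / mean of the gaps between posts) is treated as an anomaly
    signal. Thresholds here are illustrative, not calibrated.
    """
    flagged = []
    for account, times in posts_by_account.items():
        if len(times) < min_posts:
            continue  # too few posts to judge regularity
        gaps = [b - a for a, b in zip(times, times[1:])]
        mu = mean(gaps)
        if mu == 0:
            continue  # degenerate: identical timestamps
        cv = stdev(gaps) / mu  # low cv => metronome-like posting
        if cv < cv_threshold:
            flagged.append(account)
    return flagged

# A bot posting exactly every 60 s versus a human with irregular gaps.
accounts = {
    "bot":   [0, 60, 120, 180, 240, 300],
    "human": [0, 45, 200, 230, 500, 640],
}
print(flag_suspicious_accounts(accounts))  # → ['bot']
```

Real systems combine many such signals (content similarity, shared infrastructure, follower-graph analysis) rather than relying on any single heuristic like this one.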

ADDITIONAL EVIDENCE: AIs could pollute the information ecosystem with motivated lies.