4. Malicious Actors & Misuse

Cognitive risks (Risks of usage in launching cognitive warfare)

AI can be used to make and spread fake news, images, audio, and videos; propagate content of terrorism, extremism, and organized crimes; interfere in the internal affairs of other countries, social systems, and social order; and jeopardize the sovereignty of other countries.

Source: MIT AI Risk Repository (mit703)

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit703

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Prioritize the development and rapid deployment of proactive AI-enabled countermeasures and defense systems. This involves establishing continuous monitoring for anomalous activity, building advanced capabilities for anomaly detection in influence networks, and creating autonomous systems to generate real-time fact-checks and counternarratives, or to automatically remove problematic content from the information ecosystem.

2. Institute comprehensive neuro-AI readiness and societal resilience programs. This requires treating cognition as a mission-critical substrate: developing robust training and protective measures against manipulation, and substantially expanding media and information literacy education for the general public to cultivate critical thinking and fortify resilience against narrative exploitation and disinformation.

3. Establish and enforce robust ethical-legal and governance frameworks for dual-use neuroS/T and AI. This requires operationalizing "responsible use" approaches and expanding governance structures to clearly define accountability, manage risks, and set strict proportionality guidelines for any AI-enabled intervention intended to counter cognitive warfare, ensuring alignment with fundamental rights and democratic processes.
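As an illustrative sketch only (not part of the repository entry), the "anomaly detection in influence networks" capability in step 1 could, in its simplest form, flag accounts whose posting rates are statistical outliers relative to their peers, using a robust median/MAD test rather than the mean, so that a single extreme account does not mask itself by inflating the group's variance. The account names and threshold below are hypothetical.

```python
# Hypothetical example: flag accounts with anomalous posting rates
# using a modified z-score (median absolute deviation).
from statistics import median

def flag_anomalous_accounts(posts_per_day: dict[str, float],
                            z_threshold: float = 3.5) -> list[str]:
    """Return accounts whose posting rate is a robust outlier.

    Uses the modified z-score 0.6745 * |x - median| / MAD; the
    conventional cutoff of 3.5 is a tunable assumption.
    """
    rates = list(posts_per_day.values())
    med = median(rates)
    mad = median(abs(r - med) for r in rates)
    if mad == 0:  # all accounts behave identically; nothing to flag
        return []
    return [acct for acct, rate in posts_per_day.items()
            if 0.6745 * abs(rate - med) / mad > z_threshold]

# Hypothetical accounts: "a5" posts far more than its peers.
accounts = {"a1": 12, "a2": 15, "a3": 11, "a4": 14, "a5": 400}
print(flag_anomalous_accounts(accounts))  # ['a5']
```

A real deployment would of course combine many behavioral signals (timing, content similarity, network structure) rather than a single rate statistic; this sketch only shows the shape of the statistical test.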