4. Malicious Actors & Misuse

AI is used to scale up production of false and misleading information

At the same time, we are seeing how AI can be used to scale up the production of convincing yet false or misleading information online (e.g. via image, audio, and text synthesis models like BigGAN [6] and GPT-3 [7]).

Source: MIT AI Risk Repository (mit901)

ENTITY: 1 - Human

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit901

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.1 Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Develop and deploy advanced, multilingual AI models for early identification of FIMI (Foreign Information Manipulation and Interference), paired with an early warning system to enable rapid, coordinated responses across multiple online platforms.

2. Increase platform transparency and accountability by establishing clear policies and processes to discover, disrupt, and report on disinformation campaigns, while mandating clear and consistent labeling of all AI-generated synthetic media and chatbot interactions.

3. Implement sustained civic education and media literacy programs, including easy-to-use, AI-powered fact-checking tools and critical thinking resources, to build digital literacy and cognitive resilience against malign information.