
Writing - Research

Partly overlapping with the discussion of generative AI's impact on educational institutions, this topic cluster concerns the mostly negative effects of LLMs on writing skills and research manuscript composition. The former pertains to the potential homogenization of writing styles, the erosion of semantic capital, and the stifling of individual expression. The latter centers on prohibiting generative models from being used to compose scientific papers or figures, or from being listed as co-authors. Sources express concern about risks to academic integrity, as well as the prospect of polluting the scientific literature with a flood of low-quality LLM-generated manuscripts. As a consequence, there are frequent calls for the development of detectors capable of identifying synthetic text.

Source: MIT AI Risk Repository (mit84)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit84

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Mandate transparent disclosure and rigorous citation policies for the use of Large Language Models (LLMs) in research manuscript preparation, specifying the tool, purpose, and extent of LLM assistance. Concurrently, formally prohibit LLMs from being listed as co-authors, as they cannot meet the criteria for intellectual contribution and accountability.

2. Require authors to rigorously verify the factual accuracy, completeness, and originality of all LLM-generated material, ensuring the final submission reflects the human author's authentic analysis, interpretation, and critical thought, thereby preventing the pollution of the scientific literature with low-quality or erroneous content.

3. Implement and continuously refine synthetic text detection methodologies, including statistical analysis (e.g., perplexity and burstiness checks) and stylometry, to proactively identify and flag submissions that may be substantially or wholly generated by LLMs for additional human oversight and inquiry.
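To illustrate the statistical checks mentioned in the detection strategy, the following is a minimal sketch, not a production detector. The function names and thresholds are hypothetical, and a self-trained unigram model stands in for the LLM-based perplexity that real detectors compute; "burstiness" is approximated here as the relative variation of sentence lengths.

```python
import math
import re
from collections import Counter

def burstiness(text):
    """Ratio of the standard deviation to the mean of sentence lengths
    (in words). Human writing tends to vary sentence length more than
    LLM output, so low burstiness is one weak signal of synthetic text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    var = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(var) / mean if mean else 0.0

def unigram_perplexity(text, corpus_text):
    """Perplexity of `text` under an add-one-smoothed unigram model
    estimated from `corpus_text`. A toy stand-in for the language-model
    perplexity that actual detection tools use."""
    corpus = corpus_text.lower().split()
    counts = Counter(corpus)
    vocab = len(counts) + 1          # +1 for unseen words
    total = len(corpus)
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        p = (counts.get(w, 0) + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(words), 1))

def flag_for_review(text, corpus_text,
                    burstiness_floor=0.3, perplexity_floor=50.0):
    """Flag text whose burstiness AND perplexity are both unusually low.
    The thresholds are illustrative, not calibrated; a flag should only
    trigger additional human oversight, never an automatic verdict."""
    return (burstiness(text) < burstiness_floor
            and unigram_perplexity(text, corpus_text) < perplexity_floor)
```

Note the conjunctive check in `flag_for_review`: either signal alone produces many false positives, which is why such statistical checks are typically combined with stylometry and routed to human reviewers rather than used as standalone evidence.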