4. Malicious Actors & Misuse

Cheating / plagiarism

Use of generative AI in an academic setting to either cheat or plagiarize

Source: MIT AI Risk Repository (mit1350)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit1350

Domain lineage

4. Malicious Actors & Misuse (223 mapped risks)

4.3 > Fraud, scams, and targeted manipulation

Mitigation strategy

1. Redesign Academic Assessment and Policy: Establish and clearly communicate explicit, assignment-specific policies on generative AI use (e.g., prohibited, permitted with attribution, or encouraged), and redesign assessments to prioritize critical thinking, application, or personalized/localized content that resists generic AI-generated responses.

2. Implement a Hybrid Detection and Verification Framework: Use AI detection software to flag potentially non-original submissions, while requiring that final determinations of academic misconduct rest on human judgment supported by objective, verifiable evidence, such as a student's inability to verbally defend the work, inconsistencies with prior submissions, or the inclusion of non-existent sources.

3. Promote Ethical AI Literacy and Transparency: Proactively educate students on the ethical implications of generative AI, the principles of academic integrity, and the requirement to properly attribute and cite any AI assistance, thereby fostering a culture of honesty and responsibility.