
Erosion of due process

Restriction or loss of liberty as a result of the use or misuse of generative AI in a legal process

Source: MIT AI Risk Repository (mit1370)

ENTITY

1 - Human

INTENT

3 - Other

TIMING

2 - Post-deployment

Risk ID

mit1370

Domain lineage

3. Misinformation

74 mapped risks

3.1 > False or misleading information

Mitigation strategy

1. Implement a mandatory human-in-the-loop review process, requiring licensed professionals to independently verify and confirm the factual accuracy, legal veracity, and compliance of all AI-generated content (e.g., legal briefs, judicial decisions, or governmental determinations) to satisfy the duties of competence and candor.

2. Mandate the use of interpretable and reliable AI systems in all contexts affecting an individual's rights, liberty, or property, along with mandatory disclosure of the AI system's use, logic, and training data to ensure procedural due process, providing the affected party adequate notice and a meaningful opportunity to contest the decision.

3. Establish a comprehensive AI governance framework that includes formal acceptable use policies, rigorous risk-based testing protocols, and mandatory, ongoing training for all users on the limitations of generative AI (specifically hallucinations), ethical obligations, and the necessity of independent source verification.