7. AI System Safety, Failures, & Limitations

Unreliable source attribution

Source attribution is an AI system's ability to identify which training data its output, in part or in whole, was generated from. Because current attribution techniques rely on approximations, these attributions may be incorrect.

Source: MIT AI Risk Repository (mit1314)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1314

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.4 > Lack of transparency or interpretability

Mitigation strategy

- Utilize watermarking frameworks during content generation to embed immutable, source-specific metadata, ensuring high-fidelity and verifiable data provenance and source attribution
- Institute robust AI governance protocols that mandate comprehensive documentation of all training data, including feature lineage, and conduct regular AI audits to ensure adherence to mandated transparency and explainability principles for source materials
- Implement system safeguards to ensure compliance with intellectual property and licensing obligations, including the maintenance of Copyright Management Information (CMI) and fulfillment of attribution requirements for training data and open-source components
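The first mitigation, embedding verifiable source metadata at generation time, can be illustrated with a minimal sketch. This is not a watermarking framework from the repository; it is a simplified stand-in that signs a provenance record (model ID and source tags) alongside the generated content with an HMAC, so that any later tampering with the attribution claim is detectable. The key handling, field names, and record layout are all illustrative assumptions.

```python
import hashlib
import hmac
import json

# Assumed to be managed by a real key-management system in practice.
SECRET_KEY = b"provenance-signing-key"


def attach_provenance(content: str, sources: list[str], model_id: str) -> dict:
    """Bundle generated content with a provenance record and an HMAC over both."""
    record = {"model_id": model_id, "sources": sorted(sources)}
    payload = json.dumps(
        {"content": content, "provenance": record}, sort_keys=True
    ).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "provenance": record, "hmac": tag}


def verify_provenance(bundle: dict) -> bool:
    """Recompute the HMAC; tampering with content or sources makes it fail."""
    payload = json.dumps(
        {"content": bundle["content"], "provenance": bundle["provenance"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["hmac"])


bundle = attach_provenance(
    "Generated summary ...",
    sources=["corpus/news-2021", "corpus/wiki"],
    model_id="demo-model-v1",
)
print(verify_provenance(bundle))             # True
bundle["provenance"]["sources"] = ["other"]  # tampering breaks the check
print(verify_provenance(bundle))             # False
```

Note the design choice: this signs metadata travelling with the content rather than embedding a watermark inside the content itself, which real watermarking frameworks do; the sketch only demonstrates the "immutable, verifiable attribution record" property the mitigation calls for.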