Cyberspace risks (obscuring facts, misleading users, and bypassing authentication)
AI systems and their outputs, if not clearly labeled, can make it difficult for users to tell whether they are interacting with AI or to identify the source of generated content. This impedes users' ability to judge the authenticity of information, leading to misjudgment and misunderstanding. Additionally, highly realistic AI-generated images, audio, and video may circumvent existing identity verification mechanisms, such as facial recognition and voice recognition, rendering these authentication processes ineffective.
ENTITY
3 - Other
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit695
Domain lineage
3. Misinformation
3.1 > False or misleading information
Mitigation strategy
1. Implement mandatory, cryptographically secure digital content provenance standards (e.g., C2PA) to embed tamper-evident metadata, digital watermarks, or digital signatures in all AI-generated or substantially AI-modified content. This ensures clear identification and traceability of content origin and modification history, upholding the principle of transparency.
2. Upgrade and fortify all critical identity verification and authentication systems with advanced anti-spoofing countermeasures, specifically passive or active liveness detection and AI-powered deepfake detection tools, to reliably distinguish genuine human subjects from synthetic (image, audio, video) impersonation attempts.
3. Institute a tiered, risk-based algorithmic auditing regime focused on functional performance across diverse demographic and linguistic variables to proactively detect and mitigate systemic vulnerabilities that could be exploited for misinformation. Concurrently, launch sustained multi-stakeholder civic digital literacy programs to cultivate critical thinking and public resilience against sophisticated AI-driven manipulation tactics.
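To make the tamper-evident metadata idea in mitigation 1 concrete, the following is a minimal, illustrative sketch of a provenance record that binds metadata to a content hash and signs the result. It is a toy HMAC scheme with a hypothetical shared key, not the C2PA standard itself (which uses X.509 public-key signatures and a defined manifest format); all names here are assumptions for the demo.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; a real provenance system
# (e.g., C2PA) would use public-key certificates, not a shared secret.
SIGNING_KEY = b"example-provenance-key"


def sign_provenance(content: bytes, metadata: dict) -> dict:
    """Return a provenance record binding `metadata` to the content hash."""
    record = dict(metadata, content_sha256=hashlib.sha256(content).hexdigest())
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the signature; any change to content or metadata fails."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)


image = b"...synthetic image bytes..."
rec = sign_provenance(image, {"generator": "example-model", "ai_generated": True})
assert verify_provenance(image, rec)          # intact content verifies
assert not verify_provenance(b"edited", rec)  # tampering is detected
```

The key property this sketches is tamper evidence: editing either the content bytes or the `ai_generated` label invalidates the signature, so downstream verifiers can reject content whose provenance claims have been stripped or altered.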