Degraded and homogenised information environments
Beyond this, the widespread adoption of advanced AI assistants for content generation could have a number of negative consequences for our shared information ecosystem. One concern is that it could degrade the quality of the information available online. Researchers have already observed an uptick in the amount of audiovisual misinformation, elaborate scams and fake websites created using generative AI tools (Hanley and Durumeric, 2023). As more and more people turn to AI assistants to autonomously create and disseminate information to public audiences at scale, it may become increasingly difficult to parse and verify reliable information. This could further threaten and complicate the status of journalists, subject-matter experts and public information sources. Over time, a proliferation of spam, misleading or low-quality synthetic content in online spaces could also erode the digital knowledge commons – the shared knowledge resources accessible to everyone on the web, such as publicly accessible data repositories (Huang and Siddarth, 2023). At its extreme, such degradation could end up skewing people’s view of reality and scientific consensus, making them more doubtful of the credibility of all information they encounter and shaping public discourse in unproductive ways. Moreover, in an online environment saturated with AI-generated content, many people may become reliant on personalised, highly capable AI assistants for their informational needs. This also runs the risk of homogenising the type of information and ideas people encounter online (Epstein et al., 2023).
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit431
Domain lineage
3. Misinformation
3.2 > Pollution of information ecosystem and loss of consensus reality
Mitigation strategy
1. Mandate robust data governance protocols for AI model training and deployment, including rigorous data quality assurance, bias auditing, and the implementation of Retrieval Augmented Generation (RAG) architectures to anchor outputs to verified, external knowledge sources, thereby mitigating hallucinations and improving output fidelity.
2. Develop and enforce standardized transparency and provenance mechanisms, such as watermarking and metadata attribution, to reliably distinguish AI-generated content from human-created content, enabling users and platforms to assess content credibility and preserve the integrity of the digital knowledge commons.
3. Establish systematic, holistic sociotechnical evaluations of AI assistant deployments, prioritizing multi-agent and societal-level research, alongside implementing mandatory human-in-the-loop control points for critical information dissemination, to ensure appropriate human judgement and counterbalance over-reliance on personalized AI systems.
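The RAG approach named in mitigation 1 can be illustrated with a minimal sketch: retrieve a passage from a verified corpus and anchor the generated output to it. This is a toy illustration only — the keyword-overlap retriever, the `VERIFIED_SOURCES` corpus and the placeholder generator are all hypothetical stand-ins; a production system would use vector embeddings and a language model.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG) grounding.
# Assumption: a toy keyword-overlap retriever over a small verified corpus;
# real deployments use embedding-based retrieval and an LLM generator.

VERIFIED_SOURCES = [
    "The knowledge commons includes publicly accessible data repositories.",
    "Watermarking embeds provenance signals in AI-generated content.",
]

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus passage sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(corpus, key=lambda doc: len(query_words & set(doc.lower().split())))

def generate_grounded(query: str) -> str:
    """Anchor the (placeholder) answer to a retrieved, verified passage."""
    context = retrieve(query, VERIFIED_SOURCES)
    return f"[source: {context}] Answer grounded in the passage above."

print(generate_grounded("how does watermarking work"))
```

Anchoring the output to a retrieved source gives downstream users a citation to check, which is the mechanism by which RAG is claimed to mitigate hallucination.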
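The metadata-attribution mechanism in mitigation 2 can be sketched as attaching a provenance record (generator tag plus content hash) and later verifying it. This is a simplified illustration with hypothetical field names; real provenance standards such as C2PA define richer, cryptographically signed manifests.

```python
import hashlib

def attach_provenance(content: str, generator: str) -> dict:
    """Bundle content with provenance metadata: a generator tag and a content hash.
    (Field names here are illustrative, not from any standard.)"""
    return {
        "content": content,
        "provenance": {
            "generator": generator,
            "sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }

def verify_provenance(record: dict) -> bool:
    """Check that the content still matches its recorded hash (no tampering)."""
    expected = hashlib.sha256(record["content"].encode()).hexdigest()
    return record["provenance"]["sha256"] == expected

record = attach_provenance("An AI-generated summary.", "assistant-model-v1")
print(verify_provenance(record))  # True: content is untampered
record["content"] = "Edited text."
print(verify_provenance(record))  # False: content no longer matches its hash
```

A hash alone only detects modification; distinguishing AI-generated from human-created content at scale additionally requires the signature and watermarking layers the mitigation describes.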