3. Misinformation

2 - Post-deployment

AI contributes to increased online polarisation

One of the most significant commercial uses of current AI systems is in the content-recommendation algorithms of social media companies, and there are already concerns that these systems are contributing to worsened polarisation online.

Source: MIT AI Risk Repository (mit900)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit900

Domain lineage

3. Misinformation

74 mapped risks

3.2 > Pollution of information ecosystem and loss of consensus reality

Mitigation strategy

1. **Implement Algorithmic Content Reranking:** Integrate and deploy machine learning models, such as large language models, to actively detect and downrank content that expresses anti-democratic attitudes or extreme partisan animosity within user feeds. This intervention prioritizes the reduction of affective polarization by minimizing exposure to incendiary rhetoric, rather than solely optimizing for user engagement metrics.

2. **Mandate Systemic Risk Assessment and Governance:** Establish and enforce comprehensive regulatory frameworks, analogous to the Digital Services Act, that compel Very Large Online Platforms (VLOPs) to conduct rigorous, recurring risk assessments of their AI-driven recommendation systems. These assessments must specifically analyze and publicly report on the systemic risks of content amplification contributing to political polarization, requiring the implementation of effective, auditable mitigation measures.

3. **Enhance User Control and Algorithmic Choice:** Develop and offer users transparent, non-default options for feed curation that bypass engagement-maximizing recommendation algorithms. Such options should include chronological sorting or user-configurable parameters that allow for the promotion of content diversity and the active suppression of ideologically polarizing material.
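The reranking intervention in strategy 1 and the chronological option in strategy 3 can be sketched as follows. This is a minimal illustration, not the repository's or any platform's actual method: `polarization_score` stands in for the output of a hypothetical classifier, and the field names, penalty weight, and threshold are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    created_at: float            # unix timestamp
    engagement_score: float      # the platform's existing ranking signal
    polarization_score: float    # hypothetical classifier output in [0, 1]

def rerank_feed(posts, penalty_weight=2.0, downrank_threshold=0.7):
    """Strategy 1 sketch: downrank posts flagged as highly polarizing.

    Instead of sorting purely by engagement, subtract a penalty
    proportional to the polarization score for posts whose score
    exceeds the threshold.
    """
    def adjusted(post):
        if post.polarization_score >= downrank_threshold:
            return post.engagement_score - penalty_weight * post.polarization_score
        return post.engagement_score
    return sorted(posts, key=adjusted, reverse=True)

def chronological_feed(posts):
    """Strategy 3 sketch: a non-algorithmic, newest-first feed option."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

With a high-engagement but highly polarizing post, the reranker pushes it below calmer posts that it would otherwise outrank, while the chronological option ignores engagement entirely.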