4. Malicious Actors & Misuse

Advertising-driven models

AI models and systems underpin the advertising approaches that drive much of the internet, potentially influencing societal behavior.

Source: MIT AI Risk Repository (mit1048)

ENTITY: 2 - AI

INTENT: 1 - Intentional

TIMING: 2 - Post-deployment

Risk ID: mit1048

Domain lineage

4. Malicious Actors & Misuse (223 mapped risks)

4.1 > Disinformation, surveillance, and influence at scale

Mitigation strategy

1. Mandate Algorithmic Auditing and Fairness Constraints: Systematically integrate fairness metrics and bias detection into the AI model development lifecycle, requiring independent, periodic audits to assess and correct biased outcomes, thereby preventing discriminatory practices in targeting and content delivery.

2. Establish Proactive AI Governance and Transparency Frameworks: Implement a robust AI code of ethics and oversight structures, such as AI ethics boards, alongside requirements for algorithmic transparency (Explainable AI, or XAI). This governance must include emotional data safeguards and psychological impact assessments for advertising technologies that exploit consumer vulnerability.

3. Invest in Media and Digital Literacy Programs: Support long-term, structural reforms that build societal resilience by funding and embedding comprehensive media, digital, and civic literacy education. These programs should equip citizens with the critical thinking skills needed to identify, scrutinize, and resist AI-driven disinformation and manipulative commercial influence.
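As a rough illustration of the auditing idea in point 1, the sketch below checks one common fairness metric, demographic parity difference, over a batch of ad-delivery decisions. This is a minimal, hypothetical example: the function names, the threshold value, and the choice of metric are assumptions for illustration, not part of the repository's mitigation guidance.

```python
# Hypothetical periodic fairness-audit check for an ad-targeting model.
# Assumption: demographic parity difference is the audited metric, and
# a disparity above a fixed threshold flags the model for correction.

def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rate between any two groups.

    outcomes: parallel list of 0/1 decisions (e.g. ad shown or not)
    groups:   parallel list of demographic-group labels
    """
    counts = {}  # group -> (positives, total)
    for y, g in zip(outcomes, groups):
        pos, total = counts.get(g, (0, 0))
        counts[g] = (pos + y, total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

def audit(outcomes, groups, threshold=0.1):
    """Return the disparity and whether it falls within the threshold."""
    gap = demographic_parity_difference(outcomes, groups)
    return {"disparity": gap, "passes": gap <= threshold}

# Example: group "a" receives the ad 2/3 of the time, group "b" only 1/3,
# so the audit flags the disparity.
result = audit([1, 1, 0, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
```

In practice an independent auditor would compute several such metrics (equalized odds, calibration, etc.) on held-out data, and the threshold would be set by policy rather than hard-coded.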