Coercion / manipulation
Use of a technology system to covertly alter user beliefs and behaviour through nudging, dark patterns, or other opaque techniques
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit1360
Domain lineage
4. Malicious Actors & Misuse
4.1 > Disinformation, surveillance, and influence at scale
Mitigation strategy
1. Establish and Enforce Regulatory Compliance for Deceptive Design: Mandate a comprehensive 'Dark Patterns Mitigation' framework requiring all digital interfaces, especially those that use algorithmic nudging, to undergo mandatory, independent ethical design audits. The framework must explicitly prohibit manipulative tactics (e.g., forced continuity, hidden costs, non-symmetrical choice architecture) so that all user consent is freely given, informed, and unambiguous, in line with principles established in global privacy and digital services regulations.
2. Implement Transparency and User Autonomy Controls: Enforce 'Transparency by Design' by requiring systems to provide clear, plain-language, privacy-protective default settings and 'Easy Exit' mechanisms. Design user flows to offer symmetrical choices: the path to a privacy-protective or opt-out decision must not be more difficult, time-consuming, or visually obscured than the path to the company-benefiting outcome. This directly counters covert behavioural alteration.
3. Develop and Deploy Continuous, AI-Assisted Ethical Monitoring: Institute proactive, real-time monitoring systems capable of detecting both traditional and emergent algorithmic manipulation techniques, such as dynamic nudging and personalised confirmshaming. This requires ongoing data governance to track and report behavioural outcomes influenced by opaque system choices, coupled with mandatory, recurring ethics training for product and AI development teams, shifting key performance indicators (KPIs) from maximum engagement to user welfare and intentional, values-driven behaviour.
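The 'symmetrical choices' requirement in strategy 2 can be made concrete with a simple audit heuristic: compare the number of UI steps a user must take on the privacy-protective path against the company-benefiting path, and flag flows where the ratio exceeds a threshold. This is a minimal illustrative sketch; the `Flow` model, the step-count metric, and the 1.5 threshold are assumptions for demonstration, not part of any named regulation or audit standard.

```python
from dataclasses import dataclass, field

@dataclass
class Flow:
    """A user journey, modelled as the ordered UI steps needed to complete it."""
    name: str
    steps: list = field(default_factory=list)  # e.g. ["open settings", "confirm"]

def asymmetry_ratio(benefit_flow: Flow, protective_flow: Flow) -> float:
    """Effort ratio: steps on the privacy-protective path vs the
    company-benefiting path. Values well above 1.0 suggest
    non-symmetrical choice architecture."""
    return len(protective_flow.steps) / max(len(benefit_flow.steps), 1)

def flag_asymmetric_choice(benefit_flow: Flow, protective_flow: Flow,
                           threshold: float = 1.5) -> bool:
    """Flag a flow pair for human audit review when the opt-out path is
    markedly harder than the opt-in path (threshold is illustrative)."""
    return asymmetry_ratio(benefit_flow, protective_flow) > threshold

# Example audit: one-click consent vs a five-step opt-out.
accept_all = Flow("accept all", ["click 'Accept all'"])
opt_out = Flow("opt out", ["open settings", "click 'Manage choices'",
                           "untick each toggle", "scroll to footer",
                           "click 'Confirm'"])

print(flag_asymmetric_choice(accept_all, opt_out))  # True: 5x the steps
```

Step count is only one proxy for effort; a fuller audit would also weigh time on task, visual salience, and default states, but even this crude metric makes asymmetry measurable and reportable.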