8 canonical risk pages
Human-AI Interaction
Risks in dependence, overtrust, perception, and AI-assisted decision-making.
Addictive Design
Hyper-optimization of recommendation algorithms to maximize engagement time by exploiting psychological vulnerabilities and dopaminergic reward systems.
Behavioral Manipulation
Use of AI systems to subtly influence human behavior toward commercial or political goals through algorithmic persuasion techniques.
Echo Chambers
Progressive radicalization of users by recommendation algorithms that build echo chambers reinforcing only ideologically aligned content.
Emotional Dependence
Formation of psychologically unhealthy affective bonds between users and conversational AI systems, especially chatbots with simulated personalities.
Skill Loss
Atrophy of fundamental cognitive skills (writing, programming, spatial navigation, calculation) due to excessive reliance on AI assistants.
Social Isolation
Progressive replacement of human interpersonal relationships with AI interactions, resulting in the deterioration of authentic social connections and social skills.
Social Paranoia
Erosion of generalized social trust caused by the inability to distinguish authentic human communication from synthetic or manipulated interactions.
Anthropomorphism
Tendency of users to erroneously attribute human qualities, consciousness, genuine emotions, or sentience to AI systems that lack these capabilities.