Self-Actualisation Harms
These harms hinder a person’s ability to pursue a personally fulfilling life. At the individual level, an AI assistant may, through manipulation, cause users to lose control over their future life trajectory. Over time, subtle behavioural shifts can accumulate into significant, potentially problematic changes in an individual’s life. AI systems often seek to understand user preferences in order to improve the service they provide; however, when such systems rely on continuous optimisation, it can become difficult to discern whether a system is genuinely learning user preferences or steering users towards behaviours that serve its own objectives, such as user engagement or click-through rates. Were individuals to rely heavily on AI assistants for decision-making, they would risk relinquishing personal agency and entrusting important life choices to algorithmic systems, especially if assistants are ‘expert sycophants’ or produce content that sounds convincing and authoritative but is untrustworthy. This may not only reduce users’ sense of self-trust and personal empowerment; it could also undermine self-determination and hinder the exploration of individual aspirations.

At the societal level, were AI assistants to heavily influence public opinion, shape social discourse or mediate democratic processes, they could diminish communities’ collective agency, decision-making power and collective self-determination. Such erosion could hinder the pursuit of societal goals and impede the development of a thriving, participatory democracy.
ENTITY
2 - AI
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit396
Domain lineage
5. Human-Computer Interaction
5.2 > Loss of human agency and autonomy
Mitigation strategy
1. Implement mandatory transparency and explainability protocols, ensuring AI systems clearly disclose their function, intent, and the data driving their recommendations, so as to empower informed user decision-making and prevent covert algorithmic manipulation. Enforce non-coercive design principles that give users meaningful control, customisation options, and the freedom to opt out of AI-driven features easily and without penalty.
2. Establish and enforce robust human-in-the-loop mechanisms and clear accountability frameworks so that human professional judgment retains ultimate authority over consequential decisions, safeguarding against the wholesale relinquishing of personal or collective agency to autonomous algorithmic systems.
3. Develop and deploy comprehensive digital literacy and AI education programmes for end-users to cultivate a critical understanding of AI limitations, inherent biases, and the potential for manipulative influence, thereby strengthening user self-trust and reducing over-reliance on AI for critical life choices.