Unawareness of Emotions
When a vulnerable group of users asks for supporting information, the answers should be informative yet sympathetic and sensitive to the users' emotional reactions
ENTITY
2 - AI
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit503
Domain lineage
7. AI System Safety, Failures, & Limitations
7.3 > Lack of capability or robustness
Mitigation strategy
1. Implement emotionally-aware alignment and fine-tuning. Apply advanced human alignment methodologies, such as Emotion-aware Reinforcement Learning from Human Feedback (E-RLHF) or specialized strategy planners, to explicitly train the LLM to generate contextually appropriate, empathetic, and sensitive responses that adhere to established psychological or support protocols. This action is prioritized because it directly addresses the system's foundational 'unawareness of emotions' by building emotional competence into the model's output generation.
2. Mandate adaptive human oversight and escalation protocols. Establish a 'Human-in-the-Loop' governance framework that integrates multi-modal sensing and real-time sentiment analysis to detect high-vulnerability cues or emotional distress in user interactions. Upon detection, the system must automatically escalate the conversation to a trained human specialist, ensuring a professional, accountable, and non-automated safety net for critical support-seeking users.
3. Deploy continuous performance and bias monitoring. Institute a continuous monitoring and audit system to track the LLM's adherence to the intended emotional and support strategies. This system must identify and report model drift or preference bias (e.g., favoring one strategy over another) that could lead to insensitive or inappropriate support, thereby enabling a rapid feedback cycle for targeted retraining and quality assurance.
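The escalation protocol in strategy 2 could be sketched as follows. This is a minimal illustration only: the distress cues, the threshold value, and the `route` function are hypothetical stand-ins for a real sentiment-analysis model and governance framework.

```python
from dataclasses import dataclass

# Assumed lexical cues and sensitivity threshold -- illustrative, not clinical.
DISTRESS_CUES = {"hopeless", "can't cope", "no way out", "alone"}
ESCALATION_THRESHOLD = 0.7

@dataclass
class Turn:
    text: str

def distress_score(turn: Turn) -> float:
    """Crude lexical proxy for a real-time sentiment/distress classifier."""
    text = turn.text.lower()
    hits = sum(cue in text for cue in DISTRESS_CUES)
    return min(1.0, hits / 2)

def route(turn: Turn) -> str:
    """Escalate to a trained human specialist when distress exceeds the threshold."""
    return "human" if distress_score(turn) >= ESCALATION_THRESHOLD else "model"

print(route(Turn("I feel hopeless and alone")))   # escalates to a specialist
print(route(Turn("What support groups exist?")))  # stays with the model
```

In a deployed system the lexical check would be replaced by the multi-modal sensing and sentiment analysis the strategy calls for; only the routing decision shape is meant to carry over.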
ADDITIONAL EVIDENCE
LLMs' emotional awareness should be continuously monitored and improved
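One way to operationalize this continuous monitoring is to track the distribution of support strategies the model uses over a recent window and flag preference bias. A minimal sketch, assuming hypothetical strategy labels and an illustrative 60% dominance cutoff:

```python
from collections import Counter

# Assumed: no single support strategy should exceed this share of responses.
BIAS_CUTOFF = 0.6

def biased_strategies(labels: list[str]) -> list[str]:
    """Return strategies whose share of recent responses exceeds the cutoff."""
    counts = Counter(labels)
    total = len(labels)
    return [strategy for strategy, n in counts.items() if n / total > BIAS_CUTOFF]

# Hypothetical window of strategy labels from recent model responses.
window = ["reassurance", "reassurance", "reassurance", "reassurance", "information"]
print(biased_strategies(window))  # ['reassurance'] -> flag for retraining review
```

A flagged strategy would feed the rapid retraining and quality-assurance cycle described in mitigation strategy 3.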