Detecting Emotional Dynamic Trajectories: An Evaluation Framework for Emotional Support in Language Models

arXiv — cs.CL · Thursday, November 13, 2025 at 5:00:00 AM
The introduction of a new evaluation framework for emotional support in language models marks a significant advance in human-AI interaction. Traditional evaluations often rely on short, static dialogues, which fail to capture the dynamic nature of emotional support. This framework shifts the focus to emotional trajectories, backed by a large-scale benchmark of 328 emotional contexts and 1,152 disturbance events that simulate realistic emotional shifts. Adopting a user-centered perspective, it aims to improve the ability of AI to provide effective emotional support in applications such as psychological counseling and companionship. The framework draws on validated emotion regulation strategies to guide model responses, keeping them psychologically grounded, and models emotional trajectories as a first-order Markov process to enable unbiased emotional state tracking. This approach introduces new metrics such as Baseline Emotional Level (BEL), …
— via World Pulse Now AI Editorial System
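The first-order Markov framing is concrete enough to sketch. Below is a minimal, illustrative simulation of an emotional trajectory with injected disturbance events; the state names, transition probabilities, and disturbance mechanic are assumptions for demonstration and are not taken from the benchmark itself.

```python
import random

# Illustrative sketch of a first-order Markov emotional trajectory.
# State names, transition probabilities, and the disturbance override
# are assumptions; the paper's state space and metrics (e.g., Baseline
# Emotional Level) may be defined differently.

STATES = ["calm", "anxious", "distressed"]

# P(next_state | current_state): each row sums to 1.
TRANSITIONS = {
    "calm":       {"calm": 0.7, "anxious": 0.2, "distressed": 0.1},
    "anxious":    {"calm": 0.3, "anxious": 0.5, "distressed": 0.2},
    "distressed": {"calm": 0.1, "anxious": 0.4, "distressed": 0.5},
}

def step(state: str) -> str:
    """Sample the next emotional state given only the current one (first-order)."""
    probs = TRANSITIONS[state]
    return random.choices(list(probs), weights=probs.values())[0]

def simulate(initial: str, disturbances: dict[int, str], turns: int) -> list[str]:
    """Roll out a trajectory; a disturbance event overrides the state at that turn."""
    trajectory = [initial]
    for t in range(1, turns):
        state = disturbances.get(t, step(trajectory[-1]))
        trajectory.append(state)
    return trajectory

if __name__ == "__main__":
    # A disturbance at turn 3 pushes the simulated user into distress mid-dialogue.
    print(simulate("calm", {3: "distressed"}, turns=8))
```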


Recommended Readings
Preference Orchestrator: Prompt-Aware Multi-Objective Alignment for Large Language Models
Positive · Artificial Intelligence
The article introduces the PReference Orchestrator (PRO), a framework designed to enhance the alignment of Large Language Models (LLMs) with diverse human preferences across multiple objectives. Traditional methods rely on manually set preference weights, which can hinder training efficiency and complicate user experience. PRO addresses these challenges by utilizing a lightweight preference adapter that automatically infers prompt-specific preference weights during both training and deployment, thereby improving performance and efficiency.
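As a rough illustration of what a prompt-aware preference adapter can look like, the sketch below maps a prompt embedding to simplex-normalized weights over objectives and uses them to mix per-objective reward scores into a single training signal. The dimensions, two-layer design, and objective set are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

# Hypothetical preference adapter: given a prompt embedding, output one
# non-negative weight per objective (e.g., helpfulness, harmlessness),
# then mix per-objective rewards with those weights.

class PreferenceAdapter(nn.Module):
    def __init__(self, embed_dim: int = 768, num_objectives: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_objectives),
        )

    def forward(self, prompt_embedding: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the inferred preference weights non-negative and summing to 1.
        return torch.softmax(self.net(prompt_embedding), dim=-1)

adapter = PreferenceAdapter()
prompt_embedding = torch.randn(1, 768)               # stand-in for a real prompt encoding
objective_rewards = torch.tensor([[0.2, 0.9, 0.5]])  # per-objective reward scores
weights = adapter(prompt_embedding)
scalar_reward = (weights * objective_rewards).sum(dim=-1)  # single scalar signal
```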
Modeling and Predicting Multi-Turn Answer Instability in Large Language Models
Neutral · Artificial Intelligence
The paper titled 'Modeling and Predicting Multi-Turn Answer Instability in Large Language Models' discusses the evaluation of large language models (LLMs) in terms of their robustness during user interactions. The study employs multi-turn follow-up prompts to assess changes in model answers and accuracy dynamics using Markov chains. Results indicate vulnerabilities in LLMs, with a 10% accuracy drop for Gemini 1.5 Flash after a 'Think again' prompt over nine turns, and a 7.5% drop for Claude 3.5 Haiku with a reworded question. The findings suggest that accuracy can be modeled over time.
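The Markov-chain framing of answer stability is easy to illustrate: treat each turn as a transition between "correct" and "incorrect" answer states and propagate the accuracy distribution across follow-up prompts. The transition probabilities below are invented for demonstration; the paper estimates such dynamics from observed model behavior.

```python
import numpy as np

# Two-state Markov chain over answer correctness across follow-up turns.
# States: index 0 = correct, index 1 = incorrect.
P = np.array([
    [0.95, 0.05],   # correct   -> stays correct 95% of the time
    [0.20, 0.80],   # incorrect -> recovers 20% of the time
])

state = np.array([0.9, 0.1])    # assumed 90% accuracy at turn 1
for turn in range(2, 10):
    state = state @ P            # propagate one follow-up turn
    print(f"turn {turn}: expected accuracy = {state[0]:.3f}")
```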
Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models
Positive · Artificial Intelligence
The paper titled 'Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models' introduces a method to enhance the efficiency of Mixture-of-Experts (MoE) Large Language Models (LLMs). The authors propose a pre-attention expert prediction technique that improves accuracy and reduces computational overhead by utilizing activations before the attention block. This approach aims to optimize expert prefetching, achieving about a 15% improvement in accuracy over existing methods.
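The core idea, predicting expert choices from pre-attention activations so expert weights can be fetched while attention runs, can be sketched as a small auxiliary predictor. The architecture, dimensions, and top-k choice below are assumptions for illustration rather than the authors' design.

```python
import torch
import torch.nn as nn

# Rough sketch of pre-attention expert prediction for MoE prefetching:
# use the hidden state *before* the attention block to guess which experts
# the router will later select, so their weights can be loaded early.

class ExpertPredictor(nn.Module):
    def __init__(self, hidden_dim: int = 1024, num_experts: int = 64, top_k: int = 2):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_experts)
        self.top_k = top_k

    def forward(self, pre_attention_hidden: torch.Tensor) -> torch.Tensor:
        logits = self.proj(pre_attention_hidden)
        # Predicted expert ids; a runtime would start prefetching these weights now,
        # overlapping the transfer with the attention computation.
        return logits.topk(self.top_k, dim=-1).indices

predictor = ExpertPredictor()
hidden = torch.randn(1, 16, 1024)       # (batch, tokens, hidden) before attention
predicted_experts = predictor(hidden)   # shape (1, 16, 2)
```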
A Multifaceted Analysis of Negative Bias in Large Language Models through the Lens of Parametric Knowledge
Neutral · Artificial Intelligence
A recent study published on arXiv examines the phenomenon of negative bias in large language models (LLMs), which refers to their tendency to generate negative responses in binary decision tasks. The research highlights that previous studies have primarily focused on identifying negative attention heads that contribute to this bias. The authors introduce a new evaluation pipeline that categorizes responses based on the model's parametric knowledge, revealing that prompt format influences responses more strongly than the semantics of the content itself.
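One way to picture such a pipeline is to bucket each binary answer by whether the model's parametric knowledge (probed separately) covers the question, then measure the skew toward negative answers within each bucket. The sketch below is a hypothetical simplification; the field names and the probing step are assumptions, not the authors' procedure.

```python
from collections import Counter

# Bucket each binary answer by whether the model "knows" the underlying fact,
# so bias toward "No" can be compared across knowledge buckets.

def categorize(answer: str, knows_fact: bool) -> str:
    bucket = "known" if knows_fact else "unknown"
    return f"{bucket}/{answer.strip().lower()}"

responses = [
    {"answer": "No",  "knows_fact": True},
    {"answer": "No",  "knows_fact": False},
    {"answer": "Yes", "knows_fact": True},
]

counts = Counter(categorize(r["answer"], r["knows_fact"]) for r in responses)
print(counts)  # e.g. Counter({'known/no': 1, 'unknown/no': 1, 'known/yes': 1})
```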
Who Gets the Reward, Who Gets the Blame? Evaluation-Aligned Training Signals for Multi-LLM Agents
Positive · Artificial Intelligence
The article discusses a new theoretical framework for training multi-agent systems using large language models (LLMs). It aims to connect system-level evaluations with agent-level learning by integrating cooperative game-theoretic attribution and process reward modeling. This approach produces local, signed, and credit-conserving signals, enhancing cooperation among agents while penalizing harmful actions in failure scenarios.
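Exact Shapley values are one familiar instance of cooperative game-theoretic attribution and show what "credit-conserving" means: per-agent credits sum to the full system's score. The toy coalition scores below are invented, and the paper's actual signal construction (including its process-reward component) is more involved, so treat this purely as an illustration.

```python
from itertools import permutations

AGENTS = ["planner", "coder", "reviewer"]

def system_score(coalition: frozenset) -> float:
    # Hypothetical system-level evaluation of a subset of agents acting together.
    scores = {frozenset(): 0.0,
              frozenset({"planner"}): 0.2,
              frozenset({"coder"}): 0.3,
              frozenset({"reviewer"}): 0.1,
              frozenset({"planner", "coder"}): 0.7,
              frozenset({"planner", "reviewer"}): 0.4,
              frozenset({"coder", "reviewer"}): 0.5,
              frozenset(AGENTS): 1.0}
    return scores[coalition]

def shapley_values() -> dict:
    values = {a: 0.0 for a in AGENTS}
    orderings = list(permutations(AGENTS))
    for order in orderings:
        seen = frozenset()
        for agent in order:
            # Marginal contribution of this agent given who already acted.
            values[agent] += system_score(seen | {agent}) - system_score(seen)
            seen = seen | {agent}
    return {a: v / len(orderings) for a, v in values.items()}

credits = shapley_values()
print(credits, sum(credits.values()))  # credits sum to the full-system score (1.0)
```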
Identifying and Analyzing Performance-Critical Tokens in Large Language Models
Neutral · Artificial Intelligence
The paper titled 'Identifying and Analyzing Performance-Critical Tokens in Large Language Models' explores how large language models (LLMs) utilize in-context learning (ICL) for few-shot learning. It categorizes tokens in ICL prompts into content, stopword, and template tokens, aiming to identify those that significantly impact LLM performance. The study reveals that template and stopword tokens have a greater influence on performance than informative content tokens, challenging existing assumptions about human attention to informative words.
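The three token categories are straightforward to operationalize; a toy split might look like the sketch below, where the stopword list and template markers are placeholders rather than the study's actual tokenization or ablation procedure.

```python
# Illustrative split of an in-context-learning prompt into the three token
# categories the study contrasts: template, stopword, and content tokens.

STOPWORDS = {"the", "a", "an", "is", "are", "was", "of", "to"}
TEMPLATE_TOKENS = {"Review:", "Sentiment:", "\n"}

def categorize_token(token: str) -> str:
    if token in TEMPLATE_TOKENS:
        return "template"
    if token.lower() in STOPWORDS:
        return "stopword"
    return "content"

prompt_tokens = ["Review:", "the", "movie", "was", "great", "Sentiment:", "positive"]
print([(t, categorize_token(t)) for t in prompt_tokens])
```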
LDC: Learning to Generate Research Idea with Dynamic Control
Positive · Artificial Intelligence
Recent advancements in large language models (LLMs) highlight their potential in automating scientific research ideation. Current methods often produce ideas that do not meet expert standards of novelty, feasibility, and effectiveness. To address these issues, a new framework is proposed that combines Supervised Fine-Tuning (SFT) and controllable Reinforcement Learning (RL) to enhance the quality of generated research ideas through a two-stage approach.
Multimodal Peer Review Simulation with Actionable To-Do Recommendations for Community-Aware Manuscript Revisions
Positive · Artificial Intelligence
A new interactive web-based system for multimodal peer review simulation has been introduced, aimed at enhancing manuscript revisions prior to submission. This system leverages large language models (LLMs) to integrate textual and visual information, improving the quality of reviews through retrieval-augmented generation (RAG) based on OpenReview data. It converts generated reviews into actionable to-do lists, providing structured guidance for authors and seamlessly integrating with existing academic writing platforms.