MAQuA: Adaptive Question-Asking for Multidimensional Mental Health Screening using Item Response Theory

arXiv — cs.CL · Friday, November 21, 2025
  • MAQuA introduces a novel approach to mental health screening, using adaptive questioning to improve diagnostic accuracy and reduce user burden. The framework combines item response theory with factor analysis to select the most informative questions across multiple mental health dimensions.
  • The development of MAQuA is significant as it addresses the inefficiencies of traditional mental health assessments, potentially transforming how mental health conditions like depression and anxiety are diagnosed and monitored.
  • This advancement reflects a broader trend in the integration of large language models in healthcare, emphasizing the need for efficient, scalable solutions in mental health services amidst increasing demand and limited resources.
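The adaptive-questioning idea described above can be illustrated with a standard item-response-theory routine: under a two-parameter logistic (2PL) model, the next question is the one carrying maximal Fisher information at the respondent's current trait estimate. This is a minimal sketch of that generic IRT step, not MAQuA's actual implementation; the item parameters and function names here are illustrative assumptions.

```python
import math

def p_endorse(theta, a, b):
    """2PL item response function: probability of endorsing an item
    given trait level theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item at trait level theta.
    Information peaks where the item's difficulty matches theta."""
    p = p_endorse(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, items, asked):
    """Pick the not-yet-asked item with maximal information at the
    current trait estimate. `items` is a list of (a, b) tuples."""
    candidates = [(i, fisher_information(theta, a, b))
                  for i, (a, b) in enumerate(items) if i not in asked]
    return max(candidates, key=lambda c: c[1])[0]

# Hypothetical item bank: (discrimination, difficulty) pairs.
bank = [(1.0, 0.0), (2.0, 0.5), (0.5, -1.0)]
# At theta = 0.5 the sharply discriminating item with b = 0.5 is
# maximally informative, so it is asked first.
first = select_next_item(0.5, bank, asked=set())
```

In a multidimensional screener like MAQuA, this selection would run per latent dimension (e.g., depression, anxiety), with factor analysis relating items to dimensions; that extension is outside this sketch.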
— via World Pulse Now AI Editorial System


Continue Reading
Large language models and research progress: Q&A with an aerospace engineer
Neutral · Artificial Intelligence
The rapid expansion of the capabilities of large language models (LLMs)—including web search, code execution, data analysis, and hypothesis generation—is outpacing critical reflection on their role in academic research. This raises questions about the implications of LLMs across fields and the need for a more structured approach to integrating them into research methodologies.
One Pic is All it Takes: Poisoning Visual Document Retrieval Augmented Generation with a Single Image
Negative · Artificial Intelligence
The paper discusses the vulnerabilities of visual document retrieval-augmented generation (VD-RAG) systems to poisoning attacks. By injecting a single adversarial image into the knowledge base, attackers can disrupt both retrieval and generation processes. This highlights the risks associated with integrating visual modalities into retrieval-augmented systems, which are designed to enhance the accuracy of large language models.
Unsupervised Discovery of Long-Term Spatiotemporal Periodic Workflows in Human Activities
Positive · Artificial Intelligence
The study presents a benchmark of 580 multimodal human activity sequences focused on long-term periodic workflows, a pattern often overlooked in activity recognition. It introduces evaluation tasks for unsupervised periodic workflow detection, task completion tracking, and procedural anomaly detection. A lightweight, training-free baseline model is proposed to address the challenge of detecting diverse periodic workflows, particularly in low-contrast patterns.
Liars' Bench: Evaluating Lie Detectors for Language Models
Neutral · Artificial Intelligence
The article introduces LIARS' BENCH, a comprehensive testbed designed to evaluate lie detection techniques in large language models (LLMs). It consists of 72,863 examples of lies and honest responses generated by four open-weight models across seven datasets. The study reveals that existing lie detection methods often fail to identify certain types of lies, particularly when the model's deception cannot be discerned from the transcript alone, highlighting limitations in current techniques.
Physics-Based Benchmarking Metrics for Multimodal Synthetic Images
Neutral · Artificial Intelligence
The paper presents a new metric called Physics-Constrained Multimodal Data Evaluation (PCMDE) aimed at improving the evaluation of multimodal synthetic images. Current metrics like BLEU and CIDEr often fail to accurately assess semantic and structural accuracy, particularly in specific domains. PCMDE integrates large language models with reasoning and vision-language models to enhance feature extraction, validation, and physics-guided reasoning.
LiveCLKTBench: Towards Reliable Evaluation of Cross-Lingual Knowledge Transfer in Multilingual LLMs
Positive · Artificial Intelligence
LiveCLKTBench is an automated generation pipeline designed to evaluate cross-lingual knowledge transfer in large language models (LLMs). It isolates and measures knowledge transfer by identifying time-sensitive knowledge entities, filtering them based on temporal occurrence, and generating factual questions translated into multiple languages. The evaluation of several LLMs across five languages reveals that cross-lingual transfer is influenced by linguistic distance and is often asymmetric.
Bias after Prompting: Persistent Discrimination in Large Language Models
Negative · Artificial Intelligence
A recent study challenges the assumption that biases do not transfer from pre-trained large language models (LLMs) to adapted models. It reveals that biases can persist through prompt adaptations, with strong correlations observed across demographics such as gender, age, and religion. The findings indicate that popular mitigation methods may not effectively prevent bias transfer, raising concerns about the reliability of LLMs in real-world applications.
Hierarchical Token Prepending: Enhancing Information Flow in Decoder-based LLM Embeddings
Positive · Artificial Intelligence
Hierarchical Token Prepending (HTP) is a proposed method aimed at enhancing information flow in decoder-based large language model (LLM) embeddings. Traditional models face limitations due to their causal attention mechanism, which restricts backward information flow. HTP introduces block-level summary tokens to improve representation quality, achieving performance gains across various datasets, particularly in long-context scenarios.
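The core idea in the blurb above—inserting compact summary tokens so that, under causal attention, later tokens can read earlier blocks through short summary positions—can be sketched at the embedding level. This is only an illustration of the general mechanism: the paper's actual summary construction and insertion scheme are not given in this summary, so the mean-pooling choice and block layout here are assumptions.

```python
import numpy as np

def prepend_block_summaries(embeddings, block_size):
    """Illustrative sketch: before each block of token embeddings,
    insert one summary vector (here, the block's mean embedding).
    Under causal attention, tokens in later blocks can then attend
    to earlier blocks via these compact summary positions."""
    blocks = [embeddings[i:i + block_size]
              for i in range(0, len(embeddings), block_size)]
    out = []
    for block in blocks:
        out.append(block.mean(axis=0))  # assumed block-level summary token
        out.extend(block)               # followed by the block's own tokens
    return np.stack(out)

# A sequence of 4 token embeddings in 2 dimensions, blocks of 2:
# the output interleaves one summary row before each block.
tokens = np.arange(8.0).reshape(4, 2)
augmented = prepend_block_summaries(tokens, block_size=2)
```

Mean pooling is used purely for concreteness; any learned or pooled summary would fit the same interleaving pattern.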