QSTN: A Modular Framework for Robust Questionnaire Inference with Large Language Models

arXiv — cs.CL · Wednesday, December 10, 2025 at 5:00:00 AM
  • QSTN has been introduced as an open-source Python framework for generating responses to questionnaire-style prompts, facilitating in-silico surveys and annotation tasks with large language models (LLMs). The framework enables robust evaluation of questionnaire presentation and response-generation methods, informed by an extensive analysis of over 40 million survey responses (a minimal sketch of such a survey loop appears after this list).
  • This development is significant because it aims to enhance the reproducibility and reliability of LLM-based research. A no-code user interface lets researchers conduct experiments without programming skills, broadening access to advanced AI tools.
  • The introduction of QSTN reflects a growing trend in AI research towards improving the usability and effectiveness of LLMs in various applications, including qualitative data analysis and prompt optimization. As researchers explore the capabilities and limitations of LLMs, frameworks like QSTN may play a crucial role in addressing challenges related to cross-cultural understanding and decision-making processes within these models.
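To make the questionnaire-to-LLM loop concrete, here is a minimal sketch of the kind of in-silico survey run that QSTN automates. It uses the standard OpenAI Python client; the Likert scale, persona, and `ask_item` helper are illustrative assumptions, not QSTN's actual API.

```python
# Minimal sketch of an in-silico survey loop of the kind QSTN automates.
# The questionnaire structure and ask_item helper are hypothetical,
# not QSTN's actual API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LIKERT = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

def ask_item(persona: str, item: str) -> str:
    """Present one questionnaire item and return the model's chosen option."""
    options = "\n".join(f"{i + 1}. {o}" for i, o in enumerate(LIKERT))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Answer as the following respondent: {persona}"},
            {"role": "user",
             "content": f"{item}\n\nChoose exactly one option:\n{options}\n"
                        "Reply with the option text only."},
        ],
        temperature=0.0,  # deterministic-ish answers aid reproducibility checks
    )
    return resp.choices[0].message.content.strip()

answer = ask_item("a 34-year-old teacher from Spain",
                  "I feel optimistic about new technology.")
print(answer)
```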
— via World Pulse Now AI Editorial System

Continue Reading
Soft Inductive Bias Approach via Explicit Reasoning Perspectives in Inappropriate Utterance Detection Using Large Language Models
Positive · Artificial Intelligence
A new study has introduced a soft inductive bias approach to enhance inappropriate utterance detection in conversational texts using large language models (LLMs), specifically focusing on Korean corpora. This method aims to define explicit reasoning perspectives to guide inference processes, thereby improving rational decision-making and reducing errors in detecting inappropriate remarks.
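To illustrate what injecting explicit reasoning perspectives into a detection prompt might look like, here is a sketch; the three perspectives and the prompt wording are placeholders, not the paper's actual design.

```python
# Illustrative only: the paper's actual perspectives and prompt wording are
# not given in the summary above; these three perspectives are placeholders.
PERSPECTIVES = [
    "Literal meaning: does the utterance contain an insult or slur on its face?",
    "Pragmatic intent: in this conversational context, is the speaker demeaning someone?",
    "Target sensitivity: does the remark single out a protected group or individual?",
]

def build_detection_prompt(utterance: str) -> str:
    """Assemble a prompt that asks the model to reason along fixed
    perspectives before giving a final appropriateness label."""
    steps = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(PERSPECTIVES))
    return (
        "Judge whether the utterance below is inappropriate.\n"
        f"Utterance: {utterance}\n\n"
        "Reason through each perspective in order, then answer.\n"
        f"{steps}\n\n"
        "Final answer (one word): APPROPRIATE or INAPPROPRIATE."
    )

print(build_detection_prompt("You clearly didn't think before speaking."))
```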
Balanced Accuracy: The Right Metric for Evaluating LLM Judges - Explained through Youden's J statistic
Neutral · Artificial Intelligence
The evaluation of large language models (LLMs) is increasingly reliant on classifiers, either LLMs or human annotators, to assess desirable or undesirable behaviors. A recent study highlights that traditional metrics like Accuracy and F1 can be misleading due to class imbalances, advocating for the use of Youden's J statistic and Balanced Accuracy as more reliable alternatives for selecting evaluators.
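The algebra behind that recommendation is standard: with sensitivity (true-positive rate) and specificity (true-negative rate), Youden's J = sensitivity + specificity − 1 and Balanced Accuracy = (sensitivity + specificity) / 2, so BA = (J + 1) / 2 and the two rank judges identically. The sketch below, with an illustrative confusion matrix, shows how plain accuracy can flatter a near-chance judge on an imbalanced evaluation set.

```python
# Standard definitions: Youden's J and Balanced Accuracy are both linear in
# sensitivity and specificity, so they rank judges identically: BA = (J + 1) / 2.
def judge_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    j = sensitivity + specificity - 1     # Youden's J
    ba = (sensitivity + specificity) / 2  # Balanced Accuracy
    return {"accuracy": accuracy, "youden_j": j, "balanced_accuracy": ba}

# A judge that labels almost everything "good" on a 95:5 imbalanced eval set:
# plain accuracy looks strong (0.935) while J (~0.08) and BA (~0.54) reveal
# near-chance discrimination.
print(judge_metrics(tp=930, fn=20, tn=5, fp=45))
```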
What Triggers my Model? Contrastive Explanations Inform Gender Choices by Translation Models
Neutral · Artificial Intelligence
A recent study published on arXiv explores the interpretability of machine translation models, particularly focusing on how gender bias manifests in translation choices. By utilizing contrastive explanations and saliency attribution, the research investigates the influence of context, specifically input tokens, on the gender inflection selected by translation models. This approach aims to uncover the origins of gender bias rather than merely measuring its presence.
When Many-Shot Prompting Fails: An Empirical Study of LLM Code Translation
Neutral · Artificial Intelligence
A recent empirical study on Large Language Models (LLMs) has revealed that the effectiveness of many-shot prompting for code translation may be overstated. Analyzing over 90,000 translations, researchers found that while more examples can improve static similarity metrics, functional correctness peaks with fewer examples, indicating a 'many-shot paradox'.
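As a concrete illustration of the variable under study, the sketch below assembles a k-shot code-translation prompt; the demonstration pairs and template are illustrative assumptions, not the study's actual setup.

```python
# Sketch of k-shot prompt assembly for code translation; the study's actual
# prompt template and example pool are not given in the summary above.
EXAMPLES = [  # (source Python, target Java) demonstration pairs -- illustrative
    ("def add(a, b):\n    return a + b",
     "static int add(int a, int b) { return a + b; }"),
    ("def is_even(n):\n    return n % 2 == 0",
     "static boolean isEven(int n) { return n % 2 == 0; }"),
]

def many_shot_prompt(source: str, k: int) -> str:
    """Prepend k demonstration pairs; the 'many-shot paradox' finding is that
    functional correctness can peak at small k even as k grows large."""
    shots = "\n\n".join(
        f"Python:\n{py}\nJava:\n{java}" for py, java in EXAMPLES[:k]
    )
    return f"Translate Python to Java.\n\n{shots}\n\nPython:\n{source}\nJava:"

print(many_shot_prompt("def square(x):\n    return x * x", k=2))
```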
Short-Context Dominance: How Much Local Context Natural Language Actually Needs?
Neutral · Artificial Intelligence
The study investigates the short-context dominance hypothesis: a small local prefix often suffices to predict the next tokens in a sequence. Using large language models, researchers found that 75-80% of sequences from long-context documents require only the last 96 tokens for accurate prediction, motivating a new metric, Distributionally Aware MCL (DaMCL), that identifies the genuinely challenging long-context sequences.
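A minimal sketch of the prefix-sufficiency test behind that finding, assuming a Hugging Face causal LM: compare the greedy next-token prediction from the full context with the prediction from only the last 96 tokens. DaMCL's exact definition is not given here; this shows only the basic check.

```python
# Sketch of the prefix-sufficiency check underlying the hypothesis: does the
# last-96-token window yield the same next-token prediction as the full
# context? (For a meaningful test the text should exceed k tokens.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def next_token(ids: torch.Tensor) -> int:
    """Greedy next-token id given a 1-D tensor of context token ids."""
    with torch.no_grad():
        logits = model(ids.unsqueeze(0)).logits[0, -1]
    return int(logits.argmax())

def short_context_suffices(text: str, k: int = 96) -> bool:
    ids = tok(text, return_tensors="pt").input_ids[0]
    return next_token(ids) == next_token(ids[-k:])  # full vs. last-k prefix

print(short_context_suffices("The capital of France is famous because Paris"))
```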
A Systematic Evaluation of Preference Aggregation in Federated RLHF for Pluralistic Alignment of LLMs
Positive · Artificial Intelligence
A recent study has introduced a systematic evaluation framework for aligning large language models (LLMs) with diverse human preferences in federated learning environments. This framework assesses the trade-off between alignment quality and fairness using various aggregation strategies for human preferences, including a novel adaptive scheme that adjusts preference weights based on historical performance.
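The summary does not specify the adaptive scheme, but one plausible instantiation is a softmax weighting of client preference vectors by historical alignment scores, as in this sketch (the function names and the weighting rule are assumptions, not the paper's method).

```python
# Sketch of performance-weighted preference aggregation; the paper's actual
# adaptive scheme is not specified above, so the softmax weighting here is
# only one plausible instantiation.
import numpy as np

def aggregate(preferences: np.ndarray, history: np.ndarray, tau: float = 1.0):
    """preferences: (n_clients, dim) preference/reward vectors;
    history: (n_clients,) past alignment scores per client.
    Returns a single aggregated preference vector."""
    w = np.exp(history / tau)
    w /= w.sum()               # softmax over historical performance
    return w @ preferences     # convex combination of client vectors

prefs = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
hist = np.array([0.7, 0.4, 0.6])  # e.g. past win rates of each client
print(aggregate(prefs, hist))
```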
Can AI Truly Represent Your Voice in Deliberations? A Comprehensive Study of Large-Scale Opinion Aggregation with LLMs
Neutral · Artificial Intelligence
A comprehensive study has been conducted on the use of large language models (LLMs) for synthesizing public deliberations into neutral summaries. The research highlights the potential of LLMs to generate summaries while also addressing concerns regarding their ability to represent minority perspectives and biases related to input order. The study introduces DeliberationBank, a dataset created from contributions by 3,000 participants, aimed at evaluating LLM performance in summarization tasks.
Chain-of-Image Generation: Toward Monitorable and Controllable Image Generation
Positive · Artificial Intelligence
The Chain-of-Image Generation (CoIG) framework has been introduced to enhance the transparency and control of image generation models, which have traditionally operated as opaque systems. By framing image generation as a sequential, semantic process, CoIG allows for a more interpretable workflow akin to human artistic creation, utilizing large language models (LLMs) to break down complex prompts into manageable instructions.