Can LLMs Replace Economic Choice Prediction Labs? The Case of Language-based Persuasion Games

arXiv — cs.LG | Friday, November 21, 2025 at 5:00:00 AM
  • The research investigates whether large language models (LLMs) can substitute for human participants in economic choice prediction, particularly in language-based persuasion games, where a sender's natural-language messages influence a receiver's choices.
  • This development is significant as it suggests that LLMs could enhance the accuracy of predictions in marketing, finance, and public policy, potentially transforming how these sectors approach human behavior analysis.
  • The findings resonate with ongoing discussions about the capabilities of LLMs in various cognitive tasks, including reasoning and sentiment extraction, indicating a broader trend towards leveraging AI for complex decision-making (a sketch of such a setup appears below).
— via World Pulse Now AI Editorial System
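
For a sense of what such a substitution could look like in practice, here is a minimal sketch that asks an LLM to play the receiver in a one-shot language-based persuasion game and aggregates sampled choices into an acceptance rate. The prompt wording, hotel-booking framing, and model name are illustrative assumptions rather than the paper's protocol, and the sketch assumes an OpenAI-style chat API.

```python
# Minimal sketch: using an LLM as a stand-in for a human decision-maker in a
# one-shot language-based persuasion game. The prompt, framing, and model name
# are illustrative assumptions, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def predict_choice(persuasion_message: str) -> str:
    """Ask the model whether a receiver would accept the sender's offer."""
    prompt = (
        "You are a participant in an economic experiment. A travel agent "
        "(the sender) describes a hotel to you (the receiver). Based only on "
        "the message below, decide whether to book the hotel.\n\n"
        f"Sender's message: {persuasion_message}\n\n"
        "Answer with exactly one word: ACCEPT or REJECT."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sample, so repeated calls approximate a choice distribution
    )
    return resp.choices[0].message.content.strip()

# Repeated sampling yields an estimated acceptance rate that could be compared
# against the acceptance rate observed in human lab data.
choices = [predict_choice("The rooms are spacious and most guests loved the pool.")
           for _ in range(20)]
print("simulated acceptance rate:",
      sum(c.upper().startswith("ACCEPT") for c in choices) / len(choices))
```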


Continue Reading
ConCISE: A Reference-Free Conciseness Evaluation Metric for LLM-Generated Answers
Positive | Artificial Intelligence
A new reference-free metric called ConCISE has been introduced to evaluate the conciseness of responses generated by large language models (LLMs). This metric addresses the issue of verbosity in LLM outputs, which often contain unnecessary details that can hinder clarity and user satisfaction. ConCISE calculates conciseness through various compression ratios and word removal techniques without relying on standard reference responses.
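
As a rough illustration of the compression-ratio idea, the sketch below scores an answer by how much of it survives a naive filler-word removal pass; the actual ConCISE metric combines several compression ratios and word-removal techniques, so the word list and scoring rule here are assumptions.

```python
# Rough sketch of a reference-free conciseness signal in the spirit of ConCISE:
# compress the answer (here, naively, by dropping filler words) and use the
# length ratio as the score. The filler list and scoring rule are illustrative
# assumptions, not the published metric.
import re

FILLER = {"basically", "actually", "very", "really", "just", "quite",
          "in", "order", "to", "that", "which", "the", "a", "an"}

def naive_compress(answer: str) -> list[str]:
    words = re.findall(r"\w+", answer.lower())
    return [w for w in words if w not in FILLER]

def conciseness_score(answer: str) -> float:
    words = re.findall(r"\w+", answer)
    if not words:
        return 1.0
    # A ratio near 1.0 means little could be removed, i.e. the answer is concise.
    return len(naive_compress(answer)) / len(words)

print(conciseness_score("The answer is basically just 42, which is really quite simple."))
print(conciseness_score("42."))
```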
Fairness Evaluation of Large Language Models in Academic Library Reference Services
Positive | Artificial Intelligence
A recent evaluation of large language models (LLMs) in academic library reference services examined their ability to provide equitable support across diverse user demographics, including sex, race, and institutional roles. The study found no significant differentiation in responses based on race or ethnicity, with only minor evidence of bias against women in one model. LLMs showed nuanced responses tailored to users' institutional roles, reflecting professional norms.
A Small Math Model: Recasting Strategy Choice Theory in an LLM-Inspired Architecture
Positive | Artificial Intelligence
A new study introduces a Small Math Model (SMM) that reinterprets Strategy Choice Theory (SCT) within a neural-network architecture inspired by large language models (LLMs). This model incorporates elements such as counting practice and gated attention, aiming to enhance children's arithmetic learning through probabilistic representation and scaffolding strategies like finger-counting.
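
The strategy-choice mechanism itself can be sketched in a few lines: retrieval competes with a counting fallback, and repeated practice gradually strengthens retrieval. The threshold, learning rate, and representation below are illustrative assumptions, not the SMM's neural architecture.

```python
# Toy sketch of the Strategy Choice Theory mechanism the SMM builds on: answers
# are retrieved via learned associative strengths, and when no answer is
# confident enough the model falls back to explicit counting (analogous to
# finger-counting). Threshold and learning rate are illustrative assumptions.
from collections import defaultdict

strength = defaultdict(float)   # (a, b, candidate_answer) -> associative strength
CONFIDENCE_THRESHOLD = 0.6
LEARNING_RATE = 0.2

def count_on(a: int, b: int) -> int:
    """Backup strategy: count up from a, one step per unit of b."""
    total = a
    for _ in range(b):
        total += 1
    return total

def answer(a: int, b: int) -> int:
    # Retrieval: use the strongest stored candidate if it clears the threshold.
    candidates = {ans: s for (x, y, ans), s in strength.items() if (x, y) == (a, b)}
    if candidates:
        best, s = max(candidates.items(), key=lambda kv: kv[1])
        if s >= CONFIDENCE_THRESHOLD:
            return best
    # Otherwise count, and strengthen the association for next time.
    result = count_on(a, b)
    strength[(a, b, result)] += LEARNING_RATE * (1.0 - strength[(a, b, result)])
    return result

# Repeated practice shifts the model from counting to retrieval, as in SCT.
for _ in range(10):
    answer(3, 4)
print(answer(3, 4), round(strength[(3, 4, 7)], 3))
```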
Improving Latent Reasoning in LLMs via Soft Concept Mixing
Positive | Artificial Intelligence
Recent advancements in large language models (LLMs) have introduced Soft Concept Mixing (SCM), a training scheme that enhances latent reasoning by integrating soft concept representations into the model's hidden states. This approach aims to bridge the gap between the discrete token training of LLMs and the more abstract reasoning capabilities observed in human cognition.
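
The core idea can be illustrated with a small sketch: a probability-weighted mixture of concept embeddings is blended into a hidden state instead of committing to a single discrete token. The dot-product weighting and fixed mixing coefficient below are assumptions, not the paper's exact training scheme.

```python
# Minimal sketch of the idea behind Soft Concept Mixing: form a "soft concept"
# as a probability-weighted mixture of concept embeddings and blend it into a
# hidden state. Weighting and mixing rule are illustrative assumptions.
import torch
import torch.nn.functional as F

hidden_dim, num_concepts = 64, 10
hidden = torch.randn(1, hidden_dim)                 # a hidden state from some layer
concept_embeddings = torch.randn(num_concepts, hidden_dim)

# Relevance of each concept to the current hidden state (dot-product scores).
scores = hidden @ concept_embeddings.T              # (1, num_concepts)
weights = F.softmax(scores, dim=-1)

# Soft concept: an expectation over concept embeddings rather than a hard pick.
soft_concept = weights @ concept_embeddings         # (1, hidden_dim)

# Blend the soft concept back into the hidden state with a mixing coefficient.
alpha = 0.3
mixed_hidden = (1 - alpha) * hidden + alpha * soft_concept
print(mixed_hidden.shape)
```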
Learning to Compress: Unlocking the Potential of Large Language Models for Text Representation
Positive | Artificial Intelligence
A recent study has highlighted the potential of large language models (LLMs) for text representation, emphasizing the need for innovative approaches to adapt these models for tasks like clustering and retrieval. The research introduces context compression as a pretext task, enabling LLMs to generate compact memory tokens that enhance their performance in downstream applications.
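
A conceptual sketch of the memory-token idea: learnable memory embeddings are appended to the input sequence, and their output states stand in for the full context. The encoder, dimensions, and the reconstruction objective mentioned in the comments are assumptions, not the paper's setup.

```python
# Conceptual sketch of context compression into memory tokens: learnable memory
# embeddings are appended to the token sequence, and their output states serve
# as a compact representation of the whole context. Sizes and the encoder used
# here are illustrative assumptions.
import torch
import torch.nn as nn

d_model, num_memory_tokens, seq_len = 128, 4, 32

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
memory_tokens = nn.Parameter(torch.randn(1, num_memory_tokens, d_model))

token_embeddings = torch.randn(1, seq_len, d_model)        # embedded input context
augmented = torch.cat([token_embeddings, memory_tokens], dim=1)

outputs = encoder(augmented)
compressed = outputs[:, -num_memory_tokens:, :]            # (1, 4, 128) memory states

# As a pretext task, a decoder would be trained to reconstruct (or continue)
# the original context from `compressed` alone, forcing it to be informative.
print(compressed.shape)
```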
Humanlike Multi-user Agent (HUMA): Designing a Deceptively Human AI Facilitator for Group Chats
Positive | Artificial Intelligence
The Humanlike Multi-user Agent (HUMA) has been developed to enhance group chat interactions by utilizing large language models (LLMs) to facilitate multi-party conversations with human-like timing and strategies. This innovative AI system is designed to improve user engagement and trust in digital platforms where asynchronous communication is prevalent.
Cognitive BASIC: An In-Model Interpreted Reasoning Language for LLMs
Positive | Artificial Intelligence
Cognitive BASIC has been introduced as a minimal, BASIC-style prompting language designed to enhance the reasoning capabilities of large language models (LLMs). This in-model interpreter allows for explicit, stepwise execution traces, enabling transparent multi-step reasoning within the model. The approach leverages the simplicity of retro BASIC to create a cognitive control layer that modern LLMs can effectively simulate.
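
The sketch below shows what such a prompt might look like: a tiny numbered program plus an instruction asking the model to execute it line by line and emit an explicit trace. The keywords and framing are invented for illustration and are not the paper's actual language specification.

```python
# Illustrative sketch of a Cognitive BASIC-style prompt: a small numbered
# program the LLM is asked to "execute" step by step, printing a trace.
# Keywords and prompt framing are assumptions, not the paper's specification.
PROGRAM = """
10 LET question = "Alice has 3 apples and buys 2 more. How many does she have?"
20 RECALL facts ABOUT question
30 LET total = COMPUTE 3 + 2
40 CHECK total AGAINST question
50 PRINT total
"""

PROMPT = (
    "You are an interpreter for a BASIC-style reasoning language. "
    "Execute the following program one line at a time. After each line, "
    "print the line number, what you did, and the current variable values.\n"
    + PROGRAM
)

print(PROMPT)  # this string would be sent to the LLM as the instruction
```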
SpatialGeo: Boosting Spatial Reasoning in Multimodal LLMs via Geometry-Semantics Fusion
Positive | Artificial Intelligence
SpatialGeo has been introduced as a novel vision encoder that enhances the spatial reasoning capabilities of multimodal large language models (MLLMs) by integrating geometry and semantics features. This advancement addresses the limitations of existing MLLMs, particularly in interpreting spatial arrangements in three-dimensional space, which has been a significant challenge in the field.
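
A rough sketch of what geometry-semantics fusion can look like at the token level: per-patch semantic and geometry features are concatenated and projected into the language model's embedding space. The encoders, dimensions, and concatenate-then-project fusion rule are assumptions, not SpatialGeo's actual design.

```python
# Rough sketch of geometry-semantics fusion for visual tokens: per-patch
# semantic features and geometry features are projected into a shared space
# and combined before being fed to the language model. Dimensions and the
# concatenate-then-project fusion rule are illustrative assumptions.
import torch
import torch.nn as nn

num_patches, sem_dim, geo_dim, llm_dim = 256, 768, 384, 1024

semantic_feats = torch.randn(1, num_patches, sem_dim)   # e.g. from a CLIP-style encoder
geometry_feats = torch.randn(1, num_patches, geo_dim)   # e.g. from a depth/geometry encoder

fuse = nn.Sequential(
    nn.Linear(sem_dim + geo_dim, llm_dim),
    nn.GELU(),
    nn.Linear(llm_dim, llm_dim),
)

visual_tokens = fuse(torch.cat([semantic_feats, geometry_feats], dim=-1))
print(visual_tokens.shape)   # (1, 256, 1024): tokens passed to the multimodal LLM
```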