Augur: Modeling Covariate Causal Associations in Time Series via Large Language Models

arXiv — cs.LG · Thursday, November 27, 2025 at 5:00:00 AM
  • Augur is a novel framework for time series forecasting that leverages large language models (LLMs) to identify and exploit directed causal associations among covariates. Its two-stage architecture pairs a teacher LLM, which infers a causal graph over the covariates, with a student agent that refines that graph to improve forecasting accuracy.
  • Augur is significant because it addresses limitations of existing LLM-based forecasting methods, such as reliance on coarse statistical prompts and a lack of interpretability, thereby strengthening predictive capabilities across a range of applications.
  • This advancement reflects a broader trend in AI: models are increasingly designed to integrate complex data types and support decision-making. It parallels efforts in domains such as game theory and materials science, where LLMs are likewise being applied to improve predictive accuracy and operational efficiency.
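The teacher-student loop described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the teacher stage (here a naive stand-in for the LLM) proposes candidate directed edges among covariates, and the student stage greedily drops any edge whose removal does not hurt a (toy) validation forecast error. All function names and the error oracle are assumptions.

```python
# Hypothetical sketch of a two-stage causal-graph pipeline in the spirit of
# Augur (interfaces assumed, not taken from the paper).
from itertools import permutations

def teacher_propose_graph(covariates):
    """Stand-in for the teacher LLM: propose candidate directed edges.
    Here we naively propose every ordered pair of covariates."""
    return set(permutations(covariates, 2))

def student_refine(edges, forecast_error):
    """Stand-in for the student agent: greedily drop each edge whose
    removal does not increase the validation forecast error."""
    edges = set(edges)
    for edge in sorted(edges):          # iterate over a fixed snapshot
        trial = edges - {edge}
        if forecast_error(trial) <= forecast_error(edges):
            edges = trial               # the edge was not helping; prune it
    return edges

# Toy error oracle: pretend only temp -> load is truly causal, and that
# every extra edge adds a small complexity penalty.
def toy_error(edges):
    return 1.0 - 0.5 * (("temp", "load") in edges) + 0.01 * len(edges)

covariates = ["temp", "load", "price"]
graph = student_refine(teacher_propose_graph(covariates), toy_error)
print(graph)  # only the genuinely useful edge ('temp', 'load') survives
```

With a real system, `teacher_propose_graph` would query the LLM for a structured edge list and `forecast_error` would retrain or re-score the forecaster on held-out data; the greedy pruning loop is one simple refinement policy among many.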
— via World Pulse Now AI Editorial System


Continue Reading
CaptionQA: Is Your Caption as Useful as the Image Itself?
Positive · Artificial Intelligence
A new benchmark called CaptionQA has been introduced to evaluate the utility of model-generated captions in supporting downstream tasks across various domains, including Natural, Document, E-commerce, and Embodied AI. This benchmark consists of 33,027 annotated multiple-choice questions that require visual information to answer, aiming to assess whether captions can effectively replace images in multimodal systems.
Inferix: A Block-Diffusion based Next-Generation Inference Engine for World Simulation
Positive · Artificial Intelligence
Inferix has been introduced as a next-generation inference engine that utilizes a block-diffusion decoding paradigm, merging diffusion and autoregressive methods to enhance video generation capabilities. This innovation aims to create long, interactive, and high-quality videos, which are essential for applications in agentic AI, embodied AI, and gaming.
MUSE: Manipulating Unified Framework for Synthesizing Emotions in Images via Test-Time Optimization
Positive · Artificial Intelligence
MUSE, a new framework for emotional synthesis in images, has been introduced, addressing inefficiencies in current Image Emotional Synthesis (IES) methods by integrating emotional generation and editing tasks. This approach leverages Test-Time Scaling, allowing for stable synthesis guidance without the need for additional model updates or specialized datasets.
Multi-Reward GRPO for Stable and Prosodic Single-Codebook TTS LLMs at Scale
Positive · Artificial Intelligence
Recent advancements in Large Language Models (LLMs) have led to the development of a multi-reward Group Relative Policy Optimization (GRPO) framework aimed at enhancing the stability and prosody of single-codebook text-to-speech (TTS) systems. This framework integrates various rule-based rewards to optimize token generation policies, addressing issues such as unstable prosody and speaker drift that have plagued existing models.
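The core mechanism behind GRPO-style training is a group-relative advantage: several candidate outputs are scored, and each score is normalized against the group's mean and standard deviation. The sketch below is an illustration of that idea only; the reward terms (`prosody`, `speaker_sim`) are placeholders and not the paper's actual rule-based rewards.

```python
# Illustrative group-relative advantage computation in the spirit of GRPO
# (reward terms are assumed placeholders, not the paper's rewards).
import statistics

def combined_reward(sample):
    # Sum of rule-based reward terms, e.g. prosody stability + speaker match.
    return sample["prosody"] + sample["speaker_sim"]

def group_relative_advantages(samples):
    """Normalize each sample's reward against the group mean and std."""
    scores = [combined_reward(s) for s in samples]
    mu = statistics.fmean(scores)
    sigma = statistics.pstdev(scores) or 1.0  # avoid divide-by-zero
    return [(r - mu) / sigma for r in scores]

group = [
    {"prosody": 0.9, "speaker_sim": 0.8},  # best candidate
    {"prosody": 0.5, "speaker_sim": 0.6},  # worst candidate
    {"prosody": 0.7, "speaker_sim": 0.7},  # exactly at the group mean
]
advantages = group_relative_advantages(group)
print(advantages)  # positive, negative, and ~zero advantage respectively
```

Because advantages are relative within each sampled group, no separate value network is needed; candidates above the group average are reinforced and those below are penalized.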
A Systematic Analysis of Large Language Models with RAG-enabled Dynamic Prompting for Medical Error Detection and Correction
Positive · Artificial Intelligence
A systematic analysis has been conducted on large language models (LLMs) utilizing retrieval-augmented dynamic prompting (RDP) for the detection and correction of medical errors. The study evaluated various prompting strategies, including zero-shot and static prompting, using the MEDEC dataset and nine instruction-tuned LLMs, revealing performance metrics such as accuracy and recall in error processing tasks.
DSD: A Distributed Speculative Decoding Solution for Edge-Cloud Agile Large Model Serving
Positive · Artificial Intelligence
A new distributed speculative decoding framework, DSD, has been introduced to enhance large language model (LLM) inference by reducing decoding latency and improving scalability across edge-cloud environments. DSD-Sim, a discrete-event simulator, has been developed to analyze network dynamics, while an Adaptive Window Control policy optimizes throughput by adjusting speculation window sizes dynamically.
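An adaptive speculation window can be sketched with a simple feedback rule: grow the window when most draft tokens are accepted by the target model, and shrink it when many are rejected. The thresholds and update rule below are assumptions for illustration, not DSD's actual Adaptive Window Control policy.

```python
# Hedged sketch of an adaptive speculation-window policy (thresholds and
# update rule are assumptions, not the DSD paper's algorithm).
def adapt_window(window, accepted, drafted, lo=1, hi=16):
    """Return the next speculation window size given the previous round's
    draft-token acceptance count."""
    rate = accepted / drafted if drafted else 0.0
    if rate > 0.8:            # drafts mostly accepted: speculate further ahead
        window = min(hi, window + 1)
    elif rate < 0.4:          # too many wasted drafts: back off multiplicatively
        window = max(lo, window // 2)
    return window             # otherwise hold steady

w = 4
w = adapt_window(w, accepted=4, drafted=4)  # full acceptance -> grow to 5
w = adapt_window(w, accepted=1, drafted=5)  # 20% acceptance -> halve to 2
print(w)  # 2
```

In an edge-cloud split, a rule like this lets the edge draft model track fluctuating network latency and target-model agreement without any retraining: the window expands when speculation is paying off and contracts when verification is rejecting most drafts.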
Learning from Risk: LLM-Guided Generation of Safety-Critical Scenarios with Prior Knowledge
Positive · Artificial Intelligence
A new framework has been developed for generating safety-critical scenarios in autonomous driving, utilizing a conditional variational autoencoder (CVAE) and a large language model (LLM). This approach addresses the challenges posed by rare long-tail events and complex multi-agent interactions, which are crucial for safety validation but often underrepresented in real-world data. The integration allows for the creation of realistic and risk-sensitive scenarios.
Subgoal Graph-Augmented Planning for LLM-Guided Open-World Reinforcement Learning
Positive · Artificial Intelligence
A new framework called Subgoal Graph-Augmented Actor-Critic-Refiner (SGA-ACR) has been proposed to enhance the planning capabilities of large language models (LLMs) in reinforcement learning (RL) by integrating environment-specific subgoal graphs and structured entity knowledge. This addresses the misalignment between abstract planning and executable actions in RL environments.