UrbanAI 2025 Challenge: Linear vs Transformer Models for Long-Horizon Exogenous Temperature Forecasting

arXiv — cs.LG · Friday, December 12, 2025 at 5:00:00 AM
  • The UrbanAI 2025 Challenge reports significant findings in long-horizon exogenous temperature forecasting, comparing linear models (Linear, NLinear, and DLinear) against Transformer-family models (Informer and Autoformer). The linear models consistently outperform their more complex counterparts, with DLinear achieving the highest accuracy across all evaluation splits; a sketch of a DLinear-style forecaster appears after this summary.
  • This result underscores the effectiveness of linear models in time series forecasting, particularly in the challenging setting where only past temperature values are available for prediction. It suggests that simpler models can retain substantial predictive power, challenging the prevailing trend towards ever more complex architectures.
  • The findings feed into ongoing discussions in artificial intelligence about the balance between model complexity and performance. While Transformer-based models have gained popularity for their versatility, the success of linear models in this setting raises the question of whether complex architectures are necessary for specific applications such as temperature forecasting.
— via World Pulse Now AI Editorial System
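
The summary above does not include model definitions. As a point of reference, here is a minimal NumPy sketch of a DLinear-style forecaster under common assumptions: each lookback window is split into a moving-average trend and a remainder, and two independent least-squares linear maps project the components to the forecast horizon. The lookback, horizon, and kernel sizes are illustrative placeholders, not the challenge's settings.

```python
import numpy as np

def moving_average(x, kernel=25):
    """Trend component: centred moving average with edge padding,
    returning an array the same length as x."""
    pad = kernel // 2
    xp = np.pad(x, (pad, kernel - 1 - pad), mode="edge")
    return np.convolve(xp, np.ones(kernel) / kernel, mode="valid")

def fit_dlinear(series, lookback=96, horizon=24, kernel=25):
    """Fit two least-squares linear maps (trend and remainder) from a
    lookback window to the forecast horizon, DLinear-style."""
    X_trend, X_resid, Y = [], [], []
    for t in range(lookback, len(series) - horizon + 1):
        window = series[t - lookback:t]
        trend = moving_average(window, kernel)
        X_trend.append(trend)
        X_resid.append(window - trend)
        Y.append(series[t:t + horizon])
    X_trend, X_resid, Y = map(np.asarray, (X_trend, X_resid, Y))
    W_trend, *_ = np.linalg.lstsq(X_trend, Y, rcond=None)
    W_resid, *_ = np.linalg.lstsq(X_resid, Y, rcond=None)
    return W_trend, W_resid, kernel

def predict_dlinear(window, W_trend, W_resid, kernel):
    """Forecast the next horizon steps from the most recent lookback window."""
    trend = moving_average(window, kernel)
    return trend @ W_trend + (window - trend) @ W_resid

# Illustrative usage on a synthetic series with a daily temperature cycle.
rng = np.random.default_rng(0)
temps = 10 + 8 * np.sin(np.arange(2000) * 2 * np.pi / 24) + rng.normal(0, 0.5, 2000)
W_t, W_r, k = fit_dlinear(temps, lookback=96, horizon=24)
forecast = predict_dlinear(temps[-96:], W_t, W_r, k)   # 24-step-ahead forecast
```

The appeal of this family of models, as the summary notes, is that the entire forecaster is a pair of linear projections: it trains in closed form and has far fewer parameters than Informer- or Autoformer-style architectures.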

Continue Reading
SparseSwaps: Tractable LLM Pruning Mask Refinement at Scale
Positive · Artificial Intelligence
SparseSwaps introduces a scalable method for refining pruning masks in large language models (LLMs), addressing the computational challenges associated with traditional pruning techniques that often lead to performance degradation. This approach enhances the efficiency of LLMs by optimizing the selection of pruning masks without the need for full retraining, which is typically resource-intensive.
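
This blurb gives no algorithmic details, so the following is a generic toy illustration of what refining a pruning mask without retraining can mean, not the SparseSwaps method itself: start from a magnitude-based mask at fixed sparsity, then accept random prune/keep swaps only when they reduce the layer's reconstruction error on hypothetical calibration inputs.

```python
import numpy as np

def refine_mask_by_swaps(W, X, sparsity=0.5, n_swaps=200, seed=0):
    """Toy mask refinement: begin with a magnitude mask, then try random
    prune/keep swaps and keep those that lower the reconstruction error
    ||X W - X (W * mask)||^2 measured on calibration inputs X."""
    rng = np.random.default_rng(seed)
    flat = np.abs(W).ravel()
    k = int(sparsity * flat.size)
    mask = np.ones(flat.size, dtype=bool)
    mask[np.argsort(flat)[:k]] = False            # prune smallest magnitudes

    ref = X @ W                                   # dense layer output

    def err(m):
        return np.sum((ref - X @ (W * m.reshape(W.shape))) ** 2)

    best = err(mask)
    for _ in range(n_swaps):
        p = rng.choice(np.flatnonzero(~mask))     # a currently pruned weight
        q = rng.choice(np.flatnonzero(mask))      # a currently kept weight
        mask[p], mask[q] = True, False            # propose the swap
        e = err(mask)
        if e < best:
            best = e                              # accept: error went down
        else:
            mask[p], mask[q] = False, True        # revert
    return mask.reshape(W.shape)

# Hypothetical layer weights and calibration activations.
rng = np.random.default_rng(1)
W = rng.normal(size=(64, 32))
X = rng.normal(size=(256, 64))
mask = refine_mask_by_swaps(W, X, sparsity=0.5)
W_pruned = W * mask                               # 50% of the weights zeroed
```

Real methods search the space of swaps far more efficiently and at LLM scale; the sketch only fixes the vocabulary (mask, calibration data, reconstruction error) used in the summary.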
Sliding Window Attention Adaptation
Neutral · Artificial Intelligence
The recent study introduces Sliding Window Attention Adaptation (SWAA) to address the inefficiencies of long-context inference in Transformer-based Large Language Models (LLMs). By adapting models pretrained with full attention to utilize sliding window attention, the research proposes a combination of methods to enhance performance without the need for additional pretraining.
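
Again without the paper's details, the sketch below shows only the attention pattern involved, causal attention restricted to a sliding window, rather than SWAA's adaptation procedure; the single-head NumPy formulation and window size are illustrative simplifications.

```python
import numpy as np

def sliding_window_causal_mask(seq_len, window):
    """Boolean mask: position i may attend to positions j with
    i - window < j <= i (causal, limited to the last `window` tokens)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

def sliding_window_attention(Q, K, V, window):
    """Single-head scaled dot-product attention under the sliding-window mask."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    mask = sliding_window_causal_mask(Q.shape[0], window)
    scores = np.where(mask, scores, -np.inf)          # block out-of-window pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Illustrative call with random single-head projections.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(16, 8)) for _ in range(3))
out = sliding_window_attention(Q, K, V, window=4)
```

Because each token attends to at most `window` predecessors, the cost of a forward pass grows linearly rather than quadratically with context length, which is the long-context inefficiency of full attention that the SWAA work targets.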
