$\pi$-Attention: Periodic Sparse Transformers for Efficient Long-Context Modeling

arXiv — cs.CL · Monday, November 17, 2025 at 5:00:00 AM
arXiv:2511.10696v1 (announce type: new). Abstract: Transformers have revolutionized natural language processing, but their quadratic complexity with respect to sequence length remains a fundamental bottleneck for long-range modeling. While sparse attention mechanisms like RingAttention reduce computational costs by restricting attention to local neighborhoods, they suffer from limited receptive fields and a lack of adaptability. We present $\pi$-Attention, a periodic sparse Transformer that factorizes attention into ring-local neighborhoods, deterministic $\pi$-stride skips, and an adaptive fusion gate. The periodic structure provides predictable coverage of distant tokens, while the sparse footprint keeps the per-layer complexity linear in context length. We prove that $\pi$-Attention achieves $\mathcal{O}(kL + \pi \log L)$ receptive-field growth compared to $\mathcal{O}(kL)$ for RingAttention, where $k$ is the local window size, $\pi$ is the skip period, and $L$ is the sequence length. Extensive experiments on language modeling, retrieval, and vision-language tasks demonstrate that $\pi$-Attention matches or surpasses dense attention quality with 8.3% lower perplexity than RingAttention while using 50% fewer GPUs for the same context length. Our detailed ablations and visualizations reveal the importance of periodic skips, adaptive fusion, and head-level sparsity coordination for efficient long-context modeling.
— via World Pulse Now AI Editorial System
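The paper itself is not excerpted here, so the sketch below is only a toy NumPy illustration of the attention footprint the abstract describes: a ring-local window of size k unioned with deterministic skips every `pi_stride` tokens. The adaptive fusion gate is not modeled (the two patterns are simply merged into one mask), and all function and parameter names (`pi_attention_mask`, `pi_stride`, `k`) are assumptions, not identifiers from the paper.

```python
import numpy as np

def pi_attention_mask(L, k=4, pi_stride=8):
    """Boolean (L, L) mask: True where a query may attend to a key.
    Union of a ring-local window of half-width k and periodic skips every pi_stride tokens."""
    idx = np.arange(L)
    dist = np.abs(idx[:, None] - idx[None, :])
    local = dist <= k                    # ring-local neighborhood
    skips = (dist % pi_stride) == 0      # deterministic periodic skips
    return local | skips

def sparse_attention(Q, K, V, mask):
    """Masked softmax attention; disallowed pairs are set to -inf before the softmax."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

L, d = 32, 16
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))
out = sparse_attention(Q, K, V, pi_attention_mask(L))
print(out.shape)  # (32, 16); each row mixes at most 2k+1 local positions plus ~L/pi_stride skips
```

Because each row of the mask keeps only O(k + L / pi_stride) entries, the per-layer cost of such a footprint grows linearly in context length, which is the property the abstract emphasizes.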


Recommended Readings
DeepBlip: Estimating Conditional Average Treatment Effects Over Time
Positive · Artificial Intelligence
DeepBlip is a novel neural framework designed to estimate conditional average treatment effects over time using structural nested mean models (SNMMs). This approach allows for the decomposition of treatment sequences into localized, time-specific 'blip effects', enhancing interpretability and enabling efficient evaluation of treatment policies. DeepBlip integrates sequential neural networks like LSTMs and transformers, addressing the limitations of existing methods by allowing simultaneous learning of all blip functions.
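As a rough, hypothetical illustration of the blip decomposition behind structural nested mean models (the effect of a treatment sequence written as a sum of time-localized blip effects), here is a toy calculation; the blip function and its coefficients are invented for illustration, and DeepBlip's actual estimator learns these functions with sequential networks rather than assuming them.

```python
import numpy as np

def blip(t, a_t, history):
    """Hypothetical time-specific blip: incremental effect of treatment a_t given at time t
    (with no further treatment), conditional on the covariate history. Purely illustrative."""
    return a_t * (1.5 - 0.3 * t + 0.1 * history[t])

T = 4
history = np.array([0.2, -0.1, 0.4, 0.0])   # toy covariate values at each time step
treatments = np.array([1, 0, 1, 1])          # candidate treatment sequence

# Effect of the whole sequence relative to never treating = sum of the localized blips.
total_effect = sum(blip(t, treatments[t], history) for t in range(T))
print(round(float(total_effect), 3))
```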
Bayes optimal learning of attention-indexed models
Positive · Artificial Intelligence
The paper introduces the attention-indexed model (AIM), a framework for analyzing learning in deep attention layers. AIM captures the emergence of token-level outputs from bilinear interactions over high-dimensional embeddings. It allows full-width key and query matrices, aligning with practical transformers. The study derives predictions for Bayes-optimal generalization error and identifies phase transitions based on sample complexity, model width, and sequence length, proposing a message passing algorithm and demonstrating optimal performance via gradient descent.
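The precise attention-indexed model is defined in the paper; as a generic illustration of what a bilinear interaction over high-dimensional embeddings with full-width key and query matrices looks like, the snippet below computes a single-head score matrix and its softmax. All shapes and names are illustrative assumptions, not the AIM formulation itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tokens, dim = 6, 64                        # sequence length and embedding width

X = rng.standard_normal((n_tokens, dim))     # token embeddings
W_q = rng.standard_normal((dim, dim))        # full-width query matrix
W_k = rng.standard_normal((dim, dim))        # full-width key matrix

# Bilinear interaction: score_ij = x_i^T W_q^T W_k x_j / sqrt(dim)
scores = (X @ W_q.T) @ (X @ W_k.T).T / np.sqrt(dim)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
print(attn.shape)   # (6, 6) row-stochastic attention matrix
```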
CLAReSNet: When Convolution Meets Latent Attention for Hyperspectral Image Classification
Positive · Artificial Intelligence
CLAReSNet, a new hybrid architecture for hyperspectral image classification, integrates multi-scale convolutional extraction with transformer-style attention through an adaptive latent bottleneck. This model addresses challenges such as high spectral dimensionality, complex spectral-spatial correlations, and limited training samples with severe class imbalance. By combining convolutional networks and transformers, CLAReSNet aims to enhance classification accuracy and efficiency in hyperspectral imaging applications.
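The exact layer design is in the paper; as a generic sketch of the latent-bottleneck idea (a small set of latent tokens cross-attending to the full spectral-spatial token set, so attention cost scales with the number of latents rather than quadratically in tokens), here is a minimal NumPy example. Names and sizes are assumptions, not CLAReSNet's.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(2)
n_pixels, n_latents, dim = 200, 16, 32           # many input tokens -> few latent tokens

tokens = rng.standard_normal((n_pixels, dim))    # e.g. convolutional features of a patch
latents = rng.standard_normal((n_latents, dim))  # learned latent array (the bottleneck)

# Cross-attention: latents query the full token set, compressing it to n_latents vectors.
attn = softmax(latents @ tokens.T / np.sqrt(dim))   # (16, 200), cost O(n_pixels * n_latents)
compressed = attn @ tokens                          # (16, 32) bottlenecked representation
print(compressed.shape)
```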
Transformers know more than they can tell -- Learning the Collatz sequence
Neutral · Artificial Intelligence
The study investigates the ability of transformer models to predict long steps in the Collatz sequence, a complex arithmetic function that maps odd integers to their successors. The accuracy of the models varies significantly depending on the base used for encoding, achieving up to 99.7% accuracy for bases 24 and 32, while dropping to 37% and 25% for bases 11 and 3. Despite these variations, all models exhibit a common learning pattern, accurately predicting inputs with similar residuals modulo 2^p.
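For reference, the arithmetic the models are trained on is the accelerated Collatz map on odd integers, and the summary attributes the accuracy differences to the base used to encode numbers. The helpers below compute one odd-to-odd step and a base-b digit encoding; they are background arithmetic, not the paper's model or data pipeline.

```python
def next_odd_collatz(n: int) -> int:
    """Map an odd integer to its successor in the odd-numbers-only Collatz sequence:
    apply n -> 3n + 1, then divide out all factors of two."""
    assert n % 2 == 1
    n = 3 * n + 1
    while n % 2 == 0:
        n //= 2
    return n

def to_base(n: int, base: int) -> list[int]:
    """Digits of n in the given base, most significant first."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits[::-1] or [0]

n = 27
print(next_odd_collatz(n))            # 41
print(to_base(n, 24), to_base(n, 3))  # the same input encoded in bases 24 and 3
```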
Convergence Bound and Critical Batch Size of Muon Optimizer
Positive · Artificial Intelligence
The paper titled 'Convergence Bound and Critical Batch Size of Muon Optimizer' presents a theoretical analysis of the Muon optimizer, which has shown strong empirical performance and is proposed as a successor to AdamW. The study provides convergence proofs for Muon across four practical settings, examining its behavior with and without Nesterov momentum and weight decay. It highlights that the inclusion of weight decay results in tighter theoretical bounds and identifies the critical batch size that minimizes training costs, validated through experiments in image classification and language modeling.
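As a hedged sketch of the optimizer under discussion: Muon is commonly described as momentum SGD whose 2-D weight updates are orthogonalized with a Newton-Schulz iteration before being applied, optionally with Nesterov momentum and decoupled weight decay. The NumPy version below uses a simpler cubic Newton-Schulz than reference implementations and invented hyperparameters, so treat it as illustrative only, not the analyzed algorithm.

```python
import numpy as np

def orthogonalize(M: np.ndarray, steps: int = 10) -> np.ndarray:
    """Approximate the nearest semi-orthogonal matrix to M (U V^T of its SVD)
    with a cubic Newton-Schulz iteration; Muon references use a tuned quintic."""
    X = M / (np.linalg.norm(M) + 1e-7)        # scale so the iteration converges
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

def muon_step(W, grad, momentum, lr=0.02, beta=0.95, weight_decay=0.0):
    """One illustrative Muon-style update on a 2-D weight matrix:
    momentum accumulation, Nesterov-style lookahead, orthogonalized direction, decoupled decay."""
    momentum = beta * momentum + grad
    update = orthogonalize(grad + beta * momentum)
    W = W - lr * (update + weight_decay * W)
    return W, momentum

rng = np.random.default_rng(3)
W = rng.standard_normal((64, 32))
m = np.zeros_like(W)
W, m = muon_step(W, rng.standard_normal(W.shape), m, weight_decay=0.01)
print(W.shape)
```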
RiverScope: High-Resolution River Masking Dataset
Positive · Artificial Intelligence
RiverScope is a newly developed high-resolution dataset aimed at improving the monitoring of rivers and surface water dynamics, which are crucial for understanding Earth's climate system. The dataset includes 1,145 high-resolution images covering 2,577 square kilometers, with expert-labeled river and surface water masks. This initiative addresses the challenges of monitoring narrow or sediment-rich rivers that are often inadequately represented in low-resolution satellite data.
Multistability of Self-Attention Dynamics in Transformers
Neutral · Artificial Intelligence
The paper titled 'Multistability of Self-Attention Dynamics in Transformers' explores a continuous-time multiagent model of self-attention mechanisms in transformers. It establishes a connection between self-attention dynamics and a multiagent version of the Oja flow, which computes the principal eigenvector of a matrix related to the value matrix in transformers. The study classifies the equilibria of the single-head self-attention system into four categories: consensus, bipartite consensus, clustering, and polygonal equilibria, noting that multiple stable equilibria can coexist.
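For intuition on the Oja flow the summary refers to: the single-vector flow dx/dt = Ax - (x^T A x) x drives x toward the principal eigenvector of a symmetric matrix A while preserving its norm. The Euler integration below illustrates only this single-agent case with a stand-in matrix; it is not the paper's multiagent self-attention dynamics or its equilibrium classification.

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5))
A = B + B.T                                   # symmetric stand-in for the relevant matrix

x = rng.standard_normal(5)
x /= np.linalg.norm(x)
dt = 0.01
for _ in range(20000):
    x = x + dt * (A @ x - (x @ A @ x) * x)    # Euler step of the Oja flow
    x /= np.linalg.norm(x)                    # re-normalize to curb integration drift

top = np.linalg.eigh(A)[1][:, -1]             # principal eigenvector computed directly
print(abs(x @ top))                           # close to 1.0: aligned up to sign
```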