Repetitive Contrastive Learning Enhances Mamba's Selectivity in Time Series Prediction

arXiv — cs.LG · Thursday, November 13, 2025 at 5:00:00 AM
The introduction of Repetitive Contrastive Learning (RCL) marks a significant advancement in time series forecasting, particularly for Mamba-based models, which have previously excelled thanks to their sequence selection capabilities. However, these models struggled to focus on critical time steps and to suppress noise. RCL addresses both issues by pretraining a Mamba block to strengthen its selective abilities, which are then transferred to various backbone models. The approach augments sequences with Gaussian noise and applies both inter-sequence and intra-sequence contrastive learning so that information-rich time steps are prioritized. Extensive experiments demonstrate that RCL not only improves the temporal prediction performance of these models but also surpasses existing methods, achieving state-of-the-art results. Additionally, two new metrics are proposed to quantify Mamba's selective capabilities, further solidifying the impact of RCL …
— via World Pulse Now AI Editorial System
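The summary above describes two ingredients: creating noisy views of a sequence with Gaussian noise, and contrasting embeddings across and within sequences. As a minimal sketch (not the paper's actual implementation; the function names, the noise level, and the use of a generic InfoNCE loss are all assumptions), the core mechanics might look like:

```python
import numpy as np

def augment(seq, noise_std=0.1, rng=None):
    """Produce a noisy view of a time series by adding Gaussian noise.
    noise_std is an assumed hyperparameter, not a value from the paper."""
    rng = rng or np.random.default_rng(0)
    return seq + rng.normal(0.0, noise_std, size=seq.shape)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss over embedding vectors.

    anchor, positive: shape (d,); negatives: shape (k, d).
    The same time step in two views of a sequence forms an
    inter-sequence positive pair; other time steps serve as
    intra-sequence negatives (an assumed pairing scheme)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive sits at index 0
```

In this scheme, pulling the two noisy views of an information-rich time step together while pushing apart embeddings of other time steps is what would encourage the pretrained block to attend selectively, before its weights are transferred to a backbone forecaster.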


Recommended Readings
MADiff: Motion-Aware Mamba Diffusion Models for Hand Trajectory Prediction on Egocentric Videos
Positive · Artificial Intelligence
The article presents MADiff, a novel method for predicting hand trajectories in egocentric videos using diffusion models. The approach aims to improve the understanding of human intentions and actions, which is crucial for advances in embodied artificial intelligence. By addressing the difficulty of capturing high-level human intentions and the interference caused by camera egomotion, the method is significant for applications in extended reality and robot manipulation.
OpenUS: A Fully Open-Source Foundation Model for Ultrasound Image Analysis via Self-Adaptive Masked Contrastive Learning
Positive · Artificial Intelligence
OpenUS is a newly proposed open-source foundation model for ultrasound image analysis, addressing the challenges of operator-dependent interpretation and variability in ultrasound imaging. This model utilizes a vision Mamba backbone and introduces a self-adaptive masking framework that enhances pre-training through contrastive learning and masked image modeling. With a dataset comprising 308,000 images from 42 datasets, OpenUS aims to improve the generalizability and efficiency of ultrasound AI models.