Repetitive Contrastive Learning Enhances Mamba's Selectivity in Time Series Prediction
Positive · Artificial Intelligence
Repetitive Contrastive Learning (RCL) marks a significant advance in time series forecasting, particularly for Mamba-based models, which have excelled thanks to their sequence-selection capabilities but still struggle to focus on critical time steps and to suppress noise. RCL addresses these issues by pretraining a Mamba block to strengthen its selective abilities, which are then transferred to various backbone models. The approach augments sequences with Gaussian noise and combines inter-sequence and intra-sequence contrastive learning to prioritize information-rich time steps. Extensive experiments show that RCL improves the temporal prediction performance of these models and surpasses existing methods, achieving state-of-the-art results. Two new metrics are also proposed to quantify Mamba's selective capabilities, further solidifying the impact of RCL …
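The article does not spell out RCL's exact objective, but the general recipe it describes (treat a Gaussian-noise-augmented copy of a sequence as a positive view, then contrast it against other sequences in the batch) can be sketched with a standard InfoNCE-style loss. This is a minimal illustration under those assumptions, not the paper's implementation; the function names `augment` and `info_nce` are hypothetical, and a real setup would contrast learned Mamba-block representations rather than raw sequences.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, sigma=0.1):
    """Build a positive view of a time series via Gaussian-noise augmentation
    (illustrative; RCL's actual augmentation details may differ)."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style inter-sequence contrastive loss: each anchor sequence
    should match its own noisy view against all other sequences in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # diagonal = matching pairs

batch = rng.normal(size=(8, 64))   # 8 sequences, 64 time steps each
views = np.stack([augment(x) for x in batch])
loss = info_nce(batch, views)
print(float(loss))
```

In a full RCL-style pipeline, the same idea would be applied to a Mamba block's hidden states, with an additional intra-sequence term pushing the model to weight information-rich time steps over noisy ones.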
— via World Pulse Now AI Editorial System
