SPROCKET: Extending ROCKET to Distance-Based Time-Series Transformations With Prototypes

arXiv — cs.LG · Wednesday, December 10, 2025 at 5:00:00 AM
  • SPROCKET, a new prototype-based feature engineering strategy for time series classification, has been introduced as an extension of the existing ROCKET algorithm to distance-based time-series transformations built from prototypes. Experimental results indicate that SPROCKET achieves performance comparable to leading convolutional algorithms on the UCR and UEA Time Series Classification archives (a sketch of the prototype-distance idea follows below).
  • This development is significant because it demonstrates that prototype-based transformations can improve both accuracy and robustness in time series classification, potentially setting a new standard in the field and influencing future research and applications in artificial intelligence.
— via World Pulse Now AI Editorial System
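The digest does not spell out SPROCKET's exact transform, but the general idea its title names — extending ROCKET's feature pipeline with distances to prototype series, then feeding the features to a linear classifier — can be sketched as follows. The `pick_prototypes_per_class` selection rule, the Euclidean distance, and all parameter choices here are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

def prototype_distance_features(X, prototypes):
    """Map each series to its distances from a set of prototype series.

    X          : (n_samples, series_length) array of univariate series
    prototypes : (n_prototypes, series_length) array of reference series
    returns    : (n_samples, n_prototypes) feature matrix
    """
    # Plain Euclidean distance here; an elastic distance such as DTW
    # could be substituted without changing the rest of the pipeline.
    diffs = X[:, None, :] - prototypes[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

def pick_prototypes_per_class(X, y, k=5, seed=0):
    # Hypothetical selection rule: k random exemplars per class
    # (assumes each class has at least k training series).
    rng = np.random.default_rng(seed)
    chosen = [rng.choice(np.flatnonzero(y == c), size=k, replace=False)
              for c in np.unique(y)]
    return X[np.concatenate(chosen)]

def fit_prototype_classifier(X_train, y_train):
    # Same final stage as ROCKET: a ridge classifier on the features.
    prototypes = pick_prototypes_per_class(X_train, y_train)
    clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
    clf.fit(prototype_distance_features(X_train, prototypes), y_train)
    return clf, prototypes
```

As in ROCKET, the heavy lifting is in the transform; the classifier on top stays linear, which keeps training fast.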

Continue Reading
PRISM: Lightweight Multivariate Time-Series Classification through Symmetric Multi-Resolution Convolutional Layers
Positive · Artificial Intelligence
PRISM has been introduced as a lightweight fully convolutional classifier for multivariate time series classification, utilizing symmetric multi-resolution convolutional layers to efficiently capture both short-term patterns and longer-range dependencies. This model significantly reduces the number of learnable parameters while maintaining performance across various benchmarks, including human activity recognition and sleep state detection.
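The summary does not give PRISM's exact parameterization, but one plausible reading of "symmetric multi-resolution convolutional layers" — mirrored 1-D kernels (roughly halving the learnable weights) run in parallel at several dilation rates — can be sketched in PyTorch. The kernel size, dilations, and mirroring scheme below are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class SymmetricConv1d(nn.Module):
    """1-D convolution whose kernel is constrained to be symmetric
    (mirrored around its center), so only half the taps are learned."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        assert kernel_size % 2 == 1, "symmetric kernels need an odd size"
        # Learn only the left half plus the center tap.
        self.half = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size // 2 + 1) * 0.1)
        self.dilation = dilation
        self.padding = dilation * (kernel_size // 2)  # keeps length fixed

    def forward(self, x):
        # Mirror the learned half-kernel to build the full symmetric kernel.
        full = torch.cat([self.half, self.half[..., :-1].flip(-1)], dim=-1)
        return nn.functional.conv1d(
            x, full, dilation=self.dilation, padding=self.padding)

class MultiResolutionBlock(nn.Module):
    """Parallel symmetric convolutions at several dilations, so one block
    sees both short-term patterns and longer-range dependencies.
    Output has out_ch * len(dilations) channels."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            SymmetricConv1d(in_ch, out_ch, kernel_size=9, dilation=d)
            for d in dilations)

    def forward(self, x):  # x: (batch, in_ch, series_length)
        return torch.relu(torch.cat([b(x) for b in self.branches], dim=1))
```

The weight-sharing from the mirrored kernel is one concrete way a model of this kind can cut its parameter count without shrinking its receptive field.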
The Meta-Learning Gap: Combining Hydra and Quant for Large-Scale Time Series Classification
Neutral · Artificial Intelligence
The study explores the trade-off between accuracy and computational efficiency in time series classification, highlighting the limitations of comprehensive ensembles such as HIVE-COTE 2.0, which require extensive training time. By combining the Hydra and Quant algorithms, the research evaluates performance across ten large-scale MONSTER datasets, improving mean accuracy from 0.829 to 0.836 on seven of them. However, the findings reveal a significant meta-learning optimization gap: prediction-combination ensembles capture only 11% of the theoretical potential.
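The "prediction-combination ensemble" the study evaluates amounts to blending the outputs of independently trained Hydra and Quant models. A minimal soft-voting sketch, assuming both fitted pipelines expose a scikit-learn-style predict_proba(), might look like this; the actual combination rule and weighting used in the paper may differ.

```python
import numpy as np

def soft_vote(prob_a, prob_b, weight=0.5):
    """Prediction-combination ensemble: blend the class-probability
    outputs of two already-trained classifiers.
    weight balances the two models; 0.5 is a plain average."""
    return weight * np.asarray(prob_a) + (1.0 - weight) * np.asarray(prob_b)

# Hypothetical usage, with hydra_clf and quant_clf standing in for
# fitted Hydra and Quant pipelines:
#
#   blended = soft_vote(hydra_clf.predict_proba(X_test),
#                       quant_clf.predict_proba(X_test))
#   y_pred = blended.argmax(axis=1)
```

Because combination happens only at prediction time, neither base model sees the other during training, which is one reason such ensembles can leave most of their theoretical headroom uncaptured.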