One-Cycle Structured Pruning via Stability-Driven Subnetwork Search

arXiv — cs.LG · Thursday, December 18, 2025 at 5:00:00 AM
  • A new one-cycle structured pruning framework has been proposed, integrating pre-training, pruning, and fine-tuning into a single training cycle to improve efficiency while maintaining accuracy. The method identifies an optimal sub-network early in training, using norm-based group saliency criteria and structured sparsity regularization to guide the search (a minimal sketch of both ingredients follows below).
  • This matters because it addresses the high computational cost of traditional multi-stage structured pruning pipelines, potentially making efficient neural network training more accessible to researchers and practitioners in artificial intelligence.
  • The introduction of this pruning framework aligns with ongoing efforts in the AI community to enhance model performance and robustness, as seen in recent advancements in data augmentation techniques and anomaly detection frameworks. These innovations reflect a broader trend towards optimizing machine learning processes to handle complex tasks with greater efficiency and accuracy.
— via World Pulse Now AI Editorial System
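
The following sketch illustrates the two ingredients named above in PyTorch: a norm-based group saliency score (per-filter L2 norm) used to zero out the least salient filters, and a group-lasso penalty as a structured sparsity regularizer. It is a minimal, generic illustration of these standard techniques, not the paper's algorithm; the function names and the 50% pruning ratio are assumptions for the example.

```python
import torch
import torch.nn as nn

def group_saliency(conv: nn.Conv2d) -> torch.Tensor:
    """Norm-based group saliency: L2 norm of each output filter.
    One common criterion; the paper's exact score may differ."""
    w = conv.weight.detach()                         # (out, in, kH, kW)
    return w.flatten(start_dim=1).norm(p=2, dim=1)   # one score per filter

def group_lasso_penalty(conv: nn.Conv2d, lam: float = 1e-4) -> torch.Tensor:
    """Structured sparsity regularizer: sum of per-filter L2 norms
    (group lasso), added to the training loss to push whole filters
    toward zero."""
    return lam * conv.weight.flatten(start_dim=1).norm(p=2, dim=1).sum()

def prune_lowest(conv: nn.Conv2d, ratio: float = 0.5) -> torch.Tensor:
    """Zero the least salient filters and return a boolean keep-mask
    (structured pruning via masking)."""
    scores = group_saliency(conv)
    k = int(ratio * scores.numel())
    drop = scores.argsort()[:k]                      # k least salient
    mask = torch.ones_like(scores, dtype=torch.bool)
    mask[drop] = False
    with torch.no_grad():
        conv.weight[~mask] = 0.0
        if conv.bias is not None:
            conv.bias[~mask] = 0.0
    return mask

# Example: prune half the filters of a toy conv layer.
layer = nn.Conv2d(16, 32, kernel_size=3, padding=1)
keep = prune_lowest(layer, ratio=0.5)
print(f"kept {int(keep.sum())} of {keep.numel()} filters")
```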

Continue Reading
Improving Underwater Acoustic Classification Through Learnable Gabor Filter Convolution and Attention Mechanisms
Positive · Artificial Intelligence
A new study has introduced GSE ResNeXt, a deep learning architecture that enhances underwater acoustic target classification by integrating learnable Gabor convolutional layers with a ResNeXt backbone and squeeze-and-excitation attention mechanisms. This innovation addresses the challenges posed by complex underwater noise and limited datasets, improving the model's ability to extract discriminative features.
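As a rough sketch of the core building block, the layer below implements a 2-D convolution whose kernels are Gabor filters with learnable orientation, wavelength, bandwidth, phase, and aspect ratio. This is a generic learnable-Gabor layer under common parameterizations, not necessarily the one used in GSE ResNeXt; the class name, kernel size, and initializations are assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableGaborConv2d(nn.Module):
    """Conv layer whose kernels are Gabor filters with learnable
    parameters, rebuilt from those parameters on every forward pass."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 15):
        super().__init__()
        self.in_channels = in_channels
        self.kernel_size = kernel_size
        self.theta = nn.Parameter(torch.rand(out_channels) * math.pi)  # orientation
        self.sigma = nn.Parameter(torch.full((out_channels,), 3.0))    # envelope width
        self.lam = nn.Parameter(torch.full((out_channels,), 8.0))      # wavelength
        self.psi = nn.Parameter(torch.zeros(out_channels))             # phase offset
        self.gamma = nn.Parameter(torch.ones(out_channels))            # aspect ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        half = self.kernel_size // 2
        ys, xs = torch.meshgrid(
            torch.arange(-half, half + 1, dtype=torch.float32, device=x.device),
            torch.arange(-half, half + 1, dtype=torch.float32, device=x.device),
            indexing="ij",
        )
        # Rotate coordinates per filter; resulting shapes are (out, k, k).
        cos_t = torch.cos(self.theta)[:, None, None]
        sin_t = torch.sin(self.theta)[:, None, None]
        x_r = xs * cos_t + ys * sin_t
        y_r = -xs * sin_t + ys * cos_t
        sigma = self.sigma[:, None, None]
        envelope = torch.exp(-(x_r**2 + (self.gamma[:, None, None] * y_r) ** 2)
                             / (2 * sigma**2))
        carrier = torch.cos(2 * math.pi * x_r / self.lam[:, None, None]
                            + self.psi[:, None, None])
        kernels = (envelope * carrier)[:, None, :, :]            # (out, 1, k, k)
        kernels = kernels.expand(-1, self.in_channels, -1, -1)   # shared per input
        return F.conv2d(x, kernels, padding=half)

# Example: apply to a batch of single-channel spectrograms.
layer = LearnableGaborConv2d(1, 8, kernel_size=15)
out = layer(torch.randn(4, 1, 64, 64))
print(out.shape)  # torch.Size([4, 8, 64, 64])
```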
An Efficient Gradient-Based Inference Attack for Federated Learning
Neutral · Artificial Intelligence
A new gradient-based membership inference attack for federated learning has been introduced, leveraging the temporal evolution of last-layer gradients across multiple federated rounds. This method does not require access to private datasets and is designed to address both semi-honest and malicious adversaries, expanding the scope of potential data leaks in federated learning scenarios.
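The gist of such attacks can be sketched as follows: collect a statistic of the last-layer gradient for a candidate sample at each federated round, then classify membership from the temporal trajectory. The decay-slope statistic and threshold below are illustrative assumptions, not the paper's actual attack.

```python
import numpy as np

def membership_score(grad_norms: np.ndarray) -> float:
    """Score a candidate from the trajectory of its last-layer gradient
    norms across rounds. Heuristic: gradients on training members tend
    to shrink as the global model fits them, so a steeper downward
    trend suggests membership."""
    rounds = np.arange(len(grad_norms))
    slope = np.polyfit(rounds, grad_norms, deg=1)[0]  # linear trend
    return -slope  # larger score = faster decay = more member-like

def infer_membership(trajectories: dict, threshold: float) -> dict:
    """Label each candidate as member (True) or non-member (False)."""
    return {sid: membership_score(t) > threshold
            for sid, t in trajectories.items()}

# Example with synthetic trajectories over 10 federated rounds.
rng = np.random.default_rng(0)
member = np.linspace(1.0, 0.2, 10) + 0.05 * rng.standard_normal(10)
non_member = np.full(10, 0.9) + 0.05 * rng.standard_normal(10)
print(infer_membership({"a": member, "b": non_member}, threshold=0.03))
# {'a': True, 'b': False}
```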
Distillation-Guided Structural Transfer for Continual Learning Beyond Sparse Distributed Memory
Positive · Artificial Intelligence
A new framework called Selective Subnetwork Distillation (SSD) has been proposed to enhance continual learning in sparse neural systems, specifically addressing the limitations of Sparse Distributed Memory Multi-Layer Perceptrons (SDMLP). SSD enables the identification and distillation of knowledge from high-activation neurons without relying on task labels or replay, thus preserving modularity while allowing for structural realignment.
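A minimal sketch of the general idea: select the highest-activation hidden units of a teacher network on unlabeled data, then distill only those units into the student with an activation-matching loss, requiring neither task labels nor replay. The selection rule, loss, and names below are assumptions for illustration, not the SSD algorithm itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def high_activation_mask(acts: torch.Tensor, keep_frac: float = 0.1) -> torch.Tensor:
    """Select the most active hidden units: top fraction by mean
    absolute activation over a batch. acts: (batch, hidden)."""
    mean_act = acts.abs().mean(dim=0)
    k = max(1, int(keep_frac * mean_act.numel()))
    mask = torch.zeros_like(mean_act, dtype=torch.bool)
    mask[mean_act.topk(k).indices] = True
    return mask

def subnetwork_distill_loss(teacher_acts, student_acts, mask):
    """Distill only the selected subnetwork: MSE on the
    high-activation units; no task labels or replay buffer needed."""
    return F.mse_loss(student_acts[:, mask], teacher_acts[:, mask].detach())

# Example: match a student MLP's hidden layer to a frozen teacher's.
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
x = torch.randn(128, 32)
with torch.no_grad():
    t_acts = teacher(x)
mask = high_activation_mask(t_acts, keep_frac=0.1)
loss = subnetwork_distill_loss(t_acts, student(x), mask)
loss.backward()
print(f"distilling {int(mask.sum())} of {mask.numel()} units, loss={loss.item():.4f}")
```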
Bits for Privacy: Evaluating Post-Training Quantization via Membership Inference
Positive · Artificial Intelligence
A systematic study examines the privacy-utility trade-off in post-training quantization (PTQ) of deep neural networks, covering three algorithms: AdaRound, BRECQ, and OBC. The research finds that low-precision PTQ, at the 4-bit, 2-bit, and 1.58-bit levels, can significantly reduce privacy leakage while maintaining model performance on datasets including CIFAR-10, CIFAR-100, and TinyImageNet.
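For context, the baseline all three methods improve upon is plain round-to-nearest uniform quantization, sketched below; AdaRound, BRECQ, and OBC each replace the naive rounding step with an optimized reconstruction. The snippet is a generic per-tensor quantizer, not any of the studied algorithms.

```python
import torch

def quantize_per_tensor(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform symmetric per-tensor quantization to `bits` bits,
    using plain round-to-nearest and a max-abs scale."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for 4-bit
    scale = w.abs().max() / qmax
    return torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

# Quantization error grows as precision drops.
w = torch.randn(256, 256)
for bits in (8, 4, 2):
    err = (quantize_per_tensor(w, bits) - w).pow(2).mean()
    print(f"{bits}-bit MSE: {err:.5f}")
```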
SoFlow: Solution Flow Models for One-Step Generative Modeling
Positive · Artificial Intelligence
A new framework called Solution Flow Models (SoFlow) has been introduced, enabling one-step generative modeling from scratch. This approach addresses the inefficiencies associated with multi-step denoising processes in diffusion and Flow Matching models by proposing a Flow Matching loss and a solution consistency loss that enhance training performance without requiring complex calculations like the Jacobian-vector product.
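The general pattern of such one-step models can be sketched as follows: a network predicts the data endpoint directly from a point on the linear noise-data interpolation path, trained with an endpoint regression term plus a consistency term that makes predictions at nearby times agree. This is a loose illustration of that pattern; SoFlow's actual parameterization and losses are defined in the paper, and all names and hyperparameters below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SolutionModel(nn.Module):
    """Toy 'solution model' f(x_t, t) -> predicted endpoint x_1."""
    def __init__(self, dim: int = 2, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

def one_step_losses(model, x1, x0, t, dt=0.05):
    """(1) Regress the data endpoint from a point on the straight-line
    path; (2) make predictions at nearby times on the same path agree.
    No Jacobian-vector products are required."""
    x_t = (1 - t) * x0 + t * x1              # linear interpolation path
    endpoint_loss = F.mse_loss(model(x_t, t), x1)
    t2 = (t + dt).clamp(max=1.0)
    x_t2 = (1 - t2) * x0 + t2 * x1
    consistency_loss = F.mse_loss(model(x_t, t), model(x_t2, t2).detach())
    return endpoint_loss + consistency_loss

# One training step on toy 2-D data; sampling is then a single pass:
# x1_hat = model(x0, torch.zeros(n, 1)).
model = SolutionModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x1 = torch.randn(256, 2) * 0.5 + 2.0         # "data"
x0 = torch.randn(256, 2)                     # noise
t = torch.rand(256, 1)
loss = one_step_losses(model, x1, x0, t)
loss.backward()
opt.step()
print(f"loss = {loss.item():.4f}")
```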
REAL: Representation Enhanced Analytic Learning for Exemplar-free Class-incremental Learning
Positive · Artificial Intelligence
A new study presents REAL (Representation Enhanced Analytic Learning), a method designed to improve exemplar-free class-incremental learning (EFCIL) by addressing issues of representation and knowledge utilization in existing analytic continual learning frameworks. REAL employs a dual-stream pretraining approach followed by a representation-enhancing distillation process to create a more effective classifier during class-incremental learning.
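For background, the analytic-learning base that such frameworks build on fits the classifier in closed form by regularized least squares on frozen features, absorbing each task's statistics without storing exemplars. The sketch below shows that generic incremental base (an assumption for illustration); REAL's contribution lies in the representation-enhancing pretraining and distillation around it.

```python
import numpy as np

class AnalyticClassifier:
    """Exemplar-free incremental classifier: accumulate least-squares
    statistics per task and solve the ridge system in closed form."""

    def __init__(self, feat_dim: int, n_classes: int, lam: float = 1.0):
        self.A = lam * np.eye(feat_dim)       # regularized Gram accumulator
        self.b = np.zeros((feat_dim, n_classes))

    def update(self, feats: np.ndarray, labels: np.ndarray) -> None:
        """Absorb one task's data; no old samples are stored."""
        Y = np.eye(self.b.shape[1])[labels]   # one-hot targets
        self.A += feats.T @ feats
        self.b += feats.T @ Y

    def predict(self, feats: np.ndarray) -> np.ndarray:
        W = np.linalg.solve(self.A, self.b)   # ridge solution
        return (feats @ W).argmax(axis=1)

# Two tasks arriving sequentially (features assumed from a frozen backbone).
rng = np.random.default_rng(0)
protos = rng.standard_normal((4, 16)) * 3.0   # toy class prototypes
clf = AnalyticClassifier(feat_dim=16, n_classes=4)
for task_classes in ([0, 1], [2, 3]):
    labels = rng.choice(task_classes, size=200)
    feats = protos[labels] + rng.standard_normal((200, 16))
    clf.update(feats, labels)
test = protos[np.arange(4).repeat(2)] + rng.standard_normal((8, 16))
print(clf.predict(test))  # expect mostly [0 0 1 1 2 2 3 3]
```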
Arithmetic-Intensity-Aware Quantization
Positive · Artificial Intelligence
A new framework called Arithmetic-Intensity-Aware Quantization (AIQ) has been introduced to optimize the performance of neural networks by selecting per-layer bit-widths that enhance arithmetic intensity while minimizing accuracy loss. This method has shown a significant increase in throughput and efficiency on models like ResNet-20 and MobileNetV2, outperforming traditional quantization techniques.
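To see why bit-width affects throughput, recall that arithmetic intensity is operations per byte moved: halving weight precision halves weight traffic and roughly doubles intensity for memory-bound layers. The greedy per-layer selection below is an illustrative heuristic over made-up layer statistics, not AIQ's actual objective or search.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    macs: int           # multiply-accumulates per inference
    params: int         # weight count
    sensitivity: float  # est. accuracy drop per bit removed (toy value)

def arithmetic_intensity(layer: Layer, bits: int) -> float:
    """Ops per byte moved: 2*MACs over weight traffic at the given
    precision (activation traffic ignored for brevity)."""
    weight_bytes = layer.params * bits / 8
    return 2 * layer.macs / weight_bytes

def assign_bits(layers, candidates=(8, 4, 2), drop_budget=1.0):
    """Greedy sketch: per layer, pick the lowest bit-width whose
    estimated accuracy drop stays within a per-layer share of the
    total budget."""
    per_layer_budget = drop_budget / len(layers)
    plan = {}
    for layer in layers:
        for bits in sorted(candidates):                # try lowest first
            if layer.sensitivity * (8 - bits) <= per_layer_budget:
                plan[layer.name] = bits
                break
        else:
            plan[layer.name] = max(candidates)         # fall back to 8-bit
    return plan

# Toy layer statistics (illustrative numbers only).
layers = [
    Layer("conv1", macs=118_013_952, params=432,     sensitivity=0.20),
    Layer("conv2", macs=924_844_032, params=36_864,  sensitivity=0.05),
    Layer("fc",    macs=512_000,     params=512_000, sensitivity=0.01),
]
plan = assign_bits(layers, drop_budget=1.0)
for layer in layers:
    bits = plan[layer.name]
    print(f"{layer.name}: {bits}-bit, "
          f"intensity={arithmetic_intensity(layer, bits):.0f} ops/byte")
```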
