Enhanced Structured Lasso Pruning with Class-wise Information

arXiv — cs.CV · Monday, November 17, 2025 at 5:00:00 AM
- The study presents structured lasso pruning techniques for neural networks that leverage class-wise information to preserve the statistical structure of the data during model compression. This addresses a limitation of existing pruning methods, which neglect these data relationships and can sacrifice performance as a result. The two proposed methods, sGLP-IB and sTLP-IB, improve model efficiency, achieving notable parameter reductions while retaining accuracy across a range of datasets.
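The summary does not give the paper's exact formulation, but the general mechanism of structured (group) lasso pruning is to penalize the L2 norm of each filter's weights so that entire filters can be driven to zero and removed. A minimal sketch follows; the per-filter `group_weights` standing in for class-wise importance are a hypothetical illustration, not the paper's actual information-bottleneck criterion:

```python
import numpy as np

def group_lasso_penalty(weights, group_weights=None):
    """Structured lasso penalty: sum of per-filter L2 norms.

    weights: (num_filters, fan_in) -- one row per filter group.
    group_weights: optional per-filter importance (e.g. derived from
    class-wise statistics; hypothetical here).
    """
    norms = np.linalg.norm(weights, axis=1)   # one L2 norm per filter
    if group_weights is not None:
        norms = norms * group_weights         # re-weight by importance
    return norms.sum()

def prune_filters(weights, threshold):
    """Zero out whole filters whose L2 norm falls below `threshold`."""
    norms = np.linalg.norm(weights, axis=1)
    keep = norms >= threshold
    return weights * keep[:, None], keep

rng = np.random.default_rng(0)
# 8 filters of 16 weights each; the first 4 are scaled to near zero,
# mimicking filters shrunk by the group-lasso penalty during training.
W = rng.normal(size=(8, 16)) * np.array([1e-3] * 4 + [1.0] * 4)[:, None]
pruned, kept = prune_filters(W, threshold=0.5)
```

Because the penalty acts on whole filters rather than individual weights, the resulting sparsity is structured: pruned filters can be physically removed, shrinking the network.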
— via World Pulse Now AI Editorial System


Recommended Readings
ERMoE: Eigen-Reparameterized Mixture-of-Experts for Stable Routing and Interpretable Specialization
Positive · Artificial Intelligence
The article introduces ERMoE, a new Mixture-of-Experts (MoE) architecture designed to enhance model capacity by addressing challenges in routing and expert specialization. ERMoE reparameterizes experts in an orthonormal eigenbasis and utilizes an 'Eigenbasis Score' for routing, which stabilizes expert utilization and improves interpretability. This approach aims to overcome issues of misalignment and load imbalances that have hindered previous MoE architectures.
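The summary describes routing by an 'Eigenbasis Score' over orthonormal expert bases. As a rough illustration (the names and the scoring rule below are assumptions, not ERMoE's actual definition), one can score each expert by how much of the input lies in that expert's orthonormal subspace:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n_experts = 16, 4, 3

# Hypothetical sketch: each expert owns a k-dim orthonormal basis of R^d,
# obtained here by QR decomposition of a random matrix.
bases = [np.linalg.qr(rng.normal(size=(d, k)))[0] for _ in range(n_experts)]

def eigenbasis_scores(x, bases):
    """Score each expert by the fraction of x inside its subspace:
    score_e = ||Q_e^T x|| / ||x||  (1.0 means x lies fully in the subspace)."""
    nx = np.linalg.norm(x)
    return np.array([np.linalg.norm(Q.T @ x) / nx for Q in bases])

x = bases[0] @ rng.normal(size=k)   # a vector lying in expert 0's subspace
scores = eigenbasis_scores(x, bases)
```

Because each basis is orthonormal, the score is bounded in [0, 1] regardless of expert scale, which is the kind of normalization that helps keep expert utilization stable.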
Unleashing the Potential of Large Language Models for Text-to-Image Generation through Autoregressive Representation Alignment
Positive · Artificial Intelligence
The article introduces Autoregressive Representation Alignment (ARRA), a novel training framework designed to enhance text-to-image generation in autoregressive large language models (LLMs) without altering their architecture. ARRA achieves this by aligning the hidden states of LLMs with visual representations from external models through a global visual alignment loss and a hybrid token. Experimental results demonstrate that ARRA significantly reduces the Fréchet Inception Distance (FID) for models like LlamaGen, indicating improved coherence in generated images.
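The global visual alignment loss is described as aligning LLM hidden states with external visual representations. A common way to express such an alignment objective is one minus the mean cosine similarity between the two sets of (projected) features; the sketch below assumes that form for illustration, not ARRA's exact loss:

```python
import numpy as np

def global_alignment_loss(hidden, visual):
    """Hypothetical global visual alignment loss: 1 - mean cosine similarity
    between LLM hidden states and external visual representations
    (both assumed already projected to a shared dimension)."""
    h = hidden / np.linalg.norm(hidden, axis=-1, keepdims=True)
    v = visual / np.linalg.norm(visual, axis=-1, keepdims=True)
    return 1.0 - np.mean(np.sum(h * v, axis=-1))

h = np.ones((5, 8))   # 5 token positions, 8-dim features
```

Perfectly aligned representations give a loss of 0 and anti-aligned ones a loss of 2, so minimizing it pulls the LLM's hidden states toward the visual model's geometry without touching the LLM's architecture.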
AMUN: Adversarial Machine UNlearning
Positive · Artificial Intelligence
The paper titled 'AMUN: Adversarial Machine UNlearning' discusses a novel method for machine unlearning, which allows users to delete specific datasets to comply with privacy regulations. Traditional exact unlearning methods require significant computational resources, while approximate methods have not achieved satisfactory accuracy. The proposed Adversarial Machine UNlearning (AMUN) technique enhances model performance by fine-tuning on adversarial examples, effectively reducing model confidence on forgotten samples while maintaining accuracy on test datasets.
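The core mechanism named in the summary is fine-tuning on adversarial examples of the forget set to lower the model's confidence on those samples. A toy sketch with a binary logistic model and an FGSM-style perturbation (a stand-in for whatever attack AMUN actually uses) shows the effect:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary logistic model: p(y=1|x) = sigmoid(w.x)
w = np.array([2.0, -1.0])

def fgsm(x, y, w, eps):
    """Adversarial example via the fast gradient sign method; the gradient
    of the logistic loss w.r.t. x is (p - y) * w."""
    p = sigmoid(w @ x)
    return x + eps * np.sign((p - y) * w)

x_forget, y_forget = np.array([3.0, 0.0]), 1.0
p_before = sigmoid(w @ x_forget)            # high confidence on forget sample

# AMUN-style unlearning sketch: one fine-tuning step toward the flipped
# label on the adversarial example near x_forget.
x_adv = fgsm(x_forget, y_forget, w, eps=1.0)
lr = 0.5
w_after = w - lr * (sigmoid(w @ x_adv) - 0.0) * x_adv
p_after = sigmoid(w_after @ x_forget)       # confidence drops
```

Because the adversarial example sits close to the forgotten sample, fine-tuning on it shifts the decision boundary locally, reducing confidence there without retraining from scratch.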
Orthogonal Soft Pruning for Efficient Class Unlearning
Positive · Artificial Intelligence
The article discusses FedOrtho, a federated unlearning framework designed to enhance data unlearning in federated learning environments. It addresses the challenges of balancing forgetting and retention, particularly in non-IID settings. FedOrtho employs orthogonalized deep convolutional kernels and a one-shot soft pruning mechanism, achieving state-of-the-art performance on datasets like CIFAR-10 and TinyImageNet, with over 98% forgetting quality and 97% retention accuracy.
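Two ingredients named in the summary, orthogonalized convolutional kernels and one-shot soft pruning, can be sketched as follows. The QR-based orthogonalization and the median-threshold scaling rule are illustrative assumptions, not FedOrtho's published procedure:

```python
import numpy as np

def orthogonalize_kernels(K):
    """Orthonormalize flattened conv kernels (rows of K) via QR so that
    filters encode decorrelated features."""
    Q, _ = np.linalg.qr(K.T)          # columns of Q are orthonormal
    return Q.T[: K.shape[0]]

def soft_prune(K, scores, alpha=0.1):
    """One-shot soft pruning sketch: scale low-importance filters by `alpha`
    instead of deleting them (scores are hypothetical importance values)."""
    mask = np.where(scores >= np.median(scores), 1.0, alpha)
    return K * mask[:, None]

rng = np.random.default_rng(2)
K = rng.normal(size=(4, 9))               # 4 filters, 3x3 kernels flattened
K_orth = orthogonalize_kernels(K)
G = K_orth @ K_orth.T                     # Gram matrix: identity when orthonormal

scores = np.array([0.1, 0.2, 0.9, 1.0])
K_soft = soft_prune(K_orth, scores)       # low-score filters shrunk, not removed
```

Soft pruning keeps the shrunken filters recoverable, which is what lets a framework trade off forgetting against retention rather than destroying capacity outright.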
UHKD: A Unified Framework for Heterogeneous Knowledge Distillation via Frequency-Domain Representations
Positive · Artificial Intelligence
Unified Heterogeneous Knowledge Distillation (UHKD) is a proposed framework that enhances knowledge distillation (KD) by utilizing intermediate features in the frequency domain. This approach addresses the limitations of traditional KD methods, which are primarily designed for homogeneous models and struggle in heterogeneous environments. UHKD aims to improve model compression while maintaining accuracy, making it a significant advancement in the field of artificial intelligence.
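Matching intermediate features in the frequency domain is attractive for heterogeneous teacher/student pairs because magnitude spectra are insensitive to spatial misalignment. The sketch below is a simplified stand-in for UHKD's actual loss, comparing FFT magnitudes of feature maps:

```python
import numpy as np

def freq_kd_loss(student_feat, teacher_feat):
    """Frequency-domain distillation sketch: MSE between the FFT magnitude
    spectra of two intermediate feature maps. Magnitudes are invariant to
    circular spatial shifts, which spatial-domain MSE is not."""
    fs = np.abs(np.fft.fft2(student_feat))
    ft = np.abs(np.fft.fft2(teacher_feat))
    return np.mean((fs - ft) ** 2)

rng = np.random.default_rng(3)
t = rng.normal(size=(8, 8))
shifted = np.roll(t, shift=2, axis=1)   # same content, spatially shifted
spatial_mse = np.mean((shifted - t) ** 2)
freq_mse = freq_kd_loss(shifted, t)
```

A spatially shifted copy of the teacher's feature map incurs a large spatial-domain MSE but essentially zero frequency-domain loss, illustrating why the frequency view transfers better across mismatched architectures.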
On the Necessity of Output Distribution Reweighting for Effective Class Unlearning
Positive · Artificial Intelligence
The paper titled 'On the Necessity of Output Distribution Reweighting for Effective Class Unlearning' identifies a critical flaw in class unlearning evaluations, specifically the neglect of class geometry, which can lead to privacy breaches. It introduces a membership-inference attack via nearest neighbors (MIA-NN) to identify unlearned samples. The authors propose a new fine-tuning objective that adjusts the model's output distribution to mitigate privacy risks, demonstrating that existing unlearning methods are susceptible to MIA-NN across various datasets.
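The simplest form of output-distribution reweighting for class unlearning is to zero out the forgotten class's probability mass and renormalize over the remaining classes, so the model's outputs resemble those of a model never trained on that class. The paper's actual fine-tuning objective is more involved; this is only the basic idea:

```python
import numpy as np

def reweight_output(probs, forget_class):
    """Drop the forgotten class's probability and renormalize the rest.
    Illustrative only -- the paper learns this behavior via fine-tuning
    rather than applying a post-hoc transform."""
    out = probs.copy()
    out[..., forget_class] = 0.0
    return out / out.sum(axis=-1, keepdims=True)

p = np.array([0.6, 0.3, 0.1])
q = reweight_output(p, forget_class=0)   # mass shifts to remaining classes
```

Note that naive renormalization pushes the forgotten samples' mass onto their nearest classes, which is exactly the class-geometry signal the proposed MIA-NN attack exploits; the paper's objective is designed to reshape that residual distribution.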
PrivDFS: Private Inference via Distributed Feature Sharing against Data Reconstruction Attacks
Positive · Artificial Intelligence
The paper introduces PrivDFS, a distributed feature-sharing framework designed for input-private inference in image classification. It addresses vulnerabilities in split inference that allow Data Reconstruction Attacks (DRAs) to reconstruct inputs with high fidelity. By fragmenting the intermediate representation and processing these fragments independently across a majority-honest set of servers, PrivDFS limits the reconstruction capability while maintaining accuracy within 1% of non-private methods.
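Fragmenting an intermediate representation so that no single server sees the whole thing can be illustrated with additive shares: each server's fragment alone looks like noise, yet for linear operations the partial results sum back to the true output. This is a generic additive-sharing sketch under those assumptions, not PrivDFS's precise protocol:

```python
import numpy as np

def fragment(z, n_servers, rng):
    """Split a representation into additive fragments: z equals the sum of
    the shares, and any n-1 shares reveal nothing about z on their own."""
    shares = [rng.normal(size=z.shape) for _ in range(n_servers - 1)]
    shares.append(z - sum(shares))
    return shares

rng = np.random.default_rng(4)
z = rng.normal(size=8)          # client-side intermediate feature
W = rng.normal(size=(4, 8))     # a linear head applied server-side

shares = fragment(z, n_servers=3, rng=rng)
# Each server applies the linear layer to its fragment independently;
# the client sums the partial results to recover W @ z exactly.
partials = [W @ s for s in shares]
y = sum(partials)
```

Reconstruction attacks against any single fragment fail because that fragment is statistically independent of the input, while accuracy is preserved since the aggregation is exact for linear maps.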
RiverScope: High-Resolution River Masking Dataset
Positive · Artificial Intelligence
RiverScope is a newly developed high-resolution dataset aimed at improving the monitoring of rivers and surface water dynamics, which are crucial for understanding Earth's climate system. The dataset includes 1,145 high-resolution images covering 2,577 square kilometers, with expert-labeled river and surface water masks. This initiative addresses the challenges of monitoring narrow or sediment-rich rivers that are often inadequately represented in low-resolution satellite data.