Compensating Distribution Drifts in Class-incremental Learning of Pre-trained Vision Transformers

arXiv — cs.CV · Friday, November 14, 2025 at 5:00:00 AM
Class-incremental learning (CIL) with pre-trained vision transformers (ViTs) faces a critical challenge: as the backbone is fine-tuned on a sequence of tasks, its feature distribution drifts, so representations stored for earlier classes no longer match what the classifier sees. Sequential Learning with Drift Compensation (SLDC) addresses this by aligning feature distributions across tasks, which is crucial for maintaining classifier performance. This aligns with related work: 'FedeCouple' emphasizes balancing global generalization and local adaptability in federated learning, and 'Difference Vector Equalization' focuses on robust fine-tuning of vision-language models. Both underscore the importance of managing distribution shift effectively, pointing to a broader trend toward improving model adaptability and performance across diverse tasks.
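To make the compensation idea concrete, here is a minimal sketch of one common instantiation, not SLDC's actual algorithm, which the paper specifies in detail: estimate the drift as a least-squares linear map between features of the same samples extracted before and after fine-tuning, then move stored class prototypes through that map. The function names are hypothetical.

```python
import torch

def fit_drift_map(feats_old: torch.Tensor, feats_new: torch.Tensor) -> torch.Tensor:
    """Fit W minimizing ||feats_old @ W - feats_new||^2, where both tensors
    are (N, D) features of the SAME samples, taken from the backbone before
    and after fine-tuning on the new task."""
    # torch.linalg.lstsq solves the least-squares system column by column.
    return torch.linalg.lstsq(feats_old, feats_new).solution  # (D, D)

def compensate_prototypes(prototypes: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """Map stored class prototypes (C, D) into the drifted feature space so
    statistics saved for old classes stay aligned with the new backbone."""
    return prototypes @ W
```

A single linear map is the simplest choice; richer compensators (affine, per-class, or nonlinear) trade extra parameters for a tighter fit to the drift.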
— via World Pulse Now AI Editorial System


Recommended Readings
Toward Generalized Detection of Synthetic Media: Limitations, Challenges, and the Path to Multimodal Solutions
Neutral · Artificial Intelligence
Artificial intelligence (AI) in media has advanced rapidly over the past decade, particularly with the introduction of Generative Adversarial Networks (GANs) and diffusion models, which enable photorealistic image generation. The same advances, however, make real and synthetic content increasingly hard to tell apart, as the rise of deepfakes shows. Many detection models built on deep learning methods such as Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have been created, but they often struggle with generalization and multimodal data.
From Attention to Frequency: Integration of Vision Transformer and FFT-ReLU for Enhanced Image Deblurring
Positive · Artificial Intelligence
Image deblurring, a core computer-vision task, aims to restore sharp images from inputs blurred by motion or camera shake. Purely spatial deep models, including CNNs and Vision Transformers (ViTs), struggle with complex blur and carry high computational cost. A new dual-domain architecture pairs a Vision Transformer with a frequency-domain FFT-ReLU module, suppressing blur artifacts while preserving detail, and achieves superior PSNR and SSIM in extensive experiments.
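As a rough illustration of the frequency-domain half of such a dual-domain design, the sketch below applies a 2-D FFT to a feature map, a ReLU-style nonlinearity to the spectrum, and an inverse FFT. The exact placement of the nonlinearity (real/imaginary parts, magnitudes, or learned filters) varies between papers; this is one simple assumed variant, not the paper's verified module.

```python
import torch
import torch.nn as nn

class FFTReLUBlock(nn.Module):
    """Frequency-domain block: FFT -> ReLU on the spectrum -> inverse FFT.
    Since blur acts as a convolution (a pointwise product in frequency
    space), sparsifying the spectrum is one way to suppress blur energy."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the spatial (ViT) branch.
        freq = torch.fft.rfft2(x, norm="ortho")  # complex, (B, C, H, W//2 + 1)
        # One simple complex-ReLU variant: rectify real and imaginary parts.
        freq = torch.complex(torch.relu(freq.real), torch.relu(freq.imag))
        return torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")
```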
UHKD: A Unified Framework for Heterogeneous Knowledge Distillation via Frequency-Domain Representations
Positive · Artificial Intelligence
Unified Heterogeneous Knowledge Distillation (UHKD) is a proposed framework that enhances knowledge distillation (KD) by transferring intermediate features in the frequency domain. This addresses a key limitation of traditional KD methods, which assume architecturally similar teacher and student models and degrade when the architectures differ. UHKD aims to improve model compression while maintaining accuracy across such heterogeneous pairs.
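A minimal sketch of the underlying idea, assuming as a simplification that student and teacher features have already been projected to a common (B, C, H, W) shape (UHKD's actual alignment modules and loss may differ): compare intermediate features by their FFT magnitudes, which are less tied to one architecture's spatial layout than raw activations are.

```python
import torch
import torch.nn.functional as F

def frequency_distillation_loss(f_student: torch.Tensor,
                                f_teacher: torch.Tensor) -> torch.Tensor:
    """L1 distance between spectral magnitudes of intermediate features.
    Phase is dropped here because it is highly architecture-specific."""
    mag_s = torch.fft.rfft2(f_student, norm="ortho").abs()
    mag_t = torch.fft.rfft2(f_teacher, norm="ortho").abs()
    return F.l1_loss(mag_s, mag_t)
```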
LampQ: Towards Accurate Layer-wise Mixed Precision Quantization for Vision Transformers
Positive · Artificial Intelligence
The paper titled 'LampQ: Towards Accurate Layer-wise Mixed Precision Quantization for Vision Transformers' presents a new method for quantizing pre-trained Vision Transformer models. The proposed Layer-wise Mixed Precision Quantization (LampQ) addresses limitations in existing quantization methods, such as coarse granularity and metric scale mismatches. By employing a type-aware Fisher-based metric, LampQ aims to enhance both the efficiency and accuracy of quantization in various tasks, including image classification and object detection.
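To illustrate what a Fisher-based sensitivity metric can look like in its plainest form (LampQ's type-aware metric is more refined, and the helpers below are hypothetical): score each weight matrix by its mean squared gradient, an empirical diagonal-Fisher proxy, then give higher-scoring layers more bits.

```python
import torch
import torch.nn as nn

def layer_sensitivity(model: nn.Module, loss: torch.Tensor) -> dict[str, float]:
    """Empirical diagonal-Fisher proxy: mean squared gradient of the task
    loss w.r.t. each weight matrix. Higher = more quantization-sensitive."""
    loss.backward()
    return {name: (p.grad ** 2).mean().item()
            for name, p in model.named_parameters()
            if p.grad is not None and p.dim() > 1}  # skip biases/norm params

def assign_bits(scores: dict[str, float], low: int = 4, high: int = 8) -> dict[str, int]:
    """Toy allocation: the more sensitive half of the layers keeps the higher
    bit width; real methods optimize this under a model-size budget."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return {name: (high if i < len(ranked) // 2 else low)
            for i, name in enumerate(ranked)}
```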