Pre-train to Gain: Robust Learning Without Clean Labels

arXiv — cs.LG · Thursday, November 27, 2025 at 5:00:00 AM
  • A new study introduces a method for training deep networks on noisy labels, which commonly degrade performance by encouraging overfitting to mislabeled examples. Using self-supervised learning techniques such as SimCLR and Barlow Twins, the researchers show that pre-training a feature extractor without labels makes the model more robust when it is subsequently trained on noisy datasets (a minimal sketch of the two-stage recipe follows this summary). The approach was evaluated on CIFAR-10 and CIFAR-100, showing consistent improvements in classification accuracy across various noise levels.
  • This development is significant as it addresses a common challenge in machine learning, where the presence of noisy labels can severely hinder model performance. By eliminating the need for a clean subset of data, this method allows for more efficient training processes and potentially broader applications in real-world scenarios, where clean labels are often unavailable.
  • The findings resonate with ongoing discussions in the AI community regarding the challenges of noisy data and the need for robust learning frameworks. Other recent advancements, such as Active Negative Loss and Reinforcement Learning for Noisy Label Correction, further emphasize the importance of developing methodologies that can effectively handle label noise, highlighting a growing trend towards improving model reliability in diverse and challenging environments.
— via World Pulse Now AI Editorial System
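
For readers who want the mechanics, here is a minimal sketch of the two-stage recipe, assuming a SimCLR-style contrastive objective for the label-free stage; the loss below is the standard NT-Xent formulation, and the hyperparameters are illustrative rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR's NT-Xent loss over two augmented views, each (batch, dim)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature               # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))           # a view never matches itself
    n = z1.size(0)
    # positives: view i in z1 pairs with view i in z2, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Stage 1: pre-train the feature extractor on unlabeled images by
# minimizing nt_xent_loss over pairs of augmented views.
# Stage 2: attach a linear head and train on the noisy labels; the
# label-free features are what provide the robustness reported above.
```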

Continue Reading
Restora-Flow: Mask-Guided Image Restoration with Flow Matching
Positive · Artificial Intelligence
Restora-Flow has been introduced as a training-free method for image restoration that uses flow matching sampling guided by a degradation mask. The approach aims to improve restoration quality on tasks such as inpainting, super-resolution, and denoising while avoiding the long processing times and over-smoothing issues of existing methods.
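
As an illustration of what mask-guided sampling can look like with a pretrained flow-matching model, here is a hedged sketch assuming the common linear path x_t = (1 - t) * noise + t * image; velocity_fn is a placeholder for the pretrained model, and Restora-Flow's actual guidance rule may differ.

```python
import torch

def masked_flow_restore(velocity_fn, y_obs, mask, steps=50):
    """Restore y_obs where mask == 0; keep observed pixels where mask == 1."""
    x = torch.randn_like(y_obs)                    # start from pure noise, t = 0
    for i in range(steps):
        t = torch.full((y_obs.size(0),), i / steps)
        x = x + velocity_fn(x, t) / steps          # Euler step toward t = 1
        t_next = (i + 1) / steps
        # point on the forward path that matches the observation at t_next
        x_known = t_next * y_obs + (1 - t_next) * torch.randn_like(y_obs)
        x = mask * x_known + (1 - mask) * x        # re-impose known regions
    return x
```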
RobustMerge: Parameter-Efficient Model Merging for MLLMs with Direction Robustness
Positive · Artificial Intelligence
RobustMerge has been introduced as a parameter-efficient model merging method designed for multi-task learning in multimodal large language models (MLLMs), emphasizing direction robustness during the merging process. The approach addresses the challenge of merging expert models without data leakage, which grows more pressing as model sizes and data complexity increase.
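
To make "direction robustness" concrete, here is a hedged sketch of direction-aware merging of parameter-efficient task vectors (e.g., LoRA deltas). The elect-a-sign-then-average rule below is TIES-style and only illustrates the general idea of resolving direction conflicts between experts; RobustMerge's actual criterion is defined in the paper.

```python
import torch

def merge_task_vectors(deltas):
    """Merge same-shaped parameter deltas, one per expert model."""
    stacked = torch.stack(deltas)              # (num_experts, *param_shape)
    elected = torch.sign(stacked.sum(dim=0))   # majority direction per entry
    agree = torch.sign(stacked) == elected     # experts pointing the same way
    kept = torch.where(agree, stacked, torch.zeros_like(stacked))
    counts = agree.sum(dim=0).clamp(min=1)     # avoid division by zero
    return kept.sum(dim=0) / counts            # mean of direction-consistent deltas
```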
EmoFeedback$^2$: Reinforcement of Continuous Emotional Image Generation via LVLM-based Reward and Textual Feedback
Positive · Artificial Intelligence
The recent introduction of EmoFeedback$^2$ aims to enhance continuous emotional image generation (C-EICG) by utilizing a large vision-language model (LVLM) to provide reward and textual feedback, addressing the limitations of existing methods that struggle with emotional continuity and fidelity. This paradigm allows for better alignment of generated images with user emotional descriptions.
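
The interface described above can be pictured as the loop below; every name here is a placeholder rather than the paper's API, and the sketch shows only the judge-in-the-loop protocol, not the reinforcement-learning update that actually trains the generator.

```python
def emotion_feedback_loop(generate, lvlm_judge, prompt, target_emotion, rounds=3):
    """Generate, score with an LVLM, and fold the critique back into the prompt."""
    image = generate(prompt)
    reward = 0.0
    for _ in range(rounds):
        # the judge returns a scalar emotion-alignment reward and a critique
        reward, critique = lvlm_judge(image, target_emotion)
        if reward > 0.9:                             # illustrative threshold
            break
        prompt = f"{prompt}\nRevise: {critique}"     # textual feedback
        image = generate(prompt)
    return image, reward
```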
BengaliFig: A Low-Resource Challenge for Figurative and Culturally Grounded Reasoning in Bengali
Positive · Artificial Intelligence
BengaliFig has been introduced as a new challenge set aimed at evaluating figurative and culturally grounded reasoning in Bengali, a language that is considered low-resource. The dataset comprises 435 unique riddles from Bengali traditions, annotated across five dimensions to assess reasoning types and cultural depth, and is designed for use with large language models (LLMs).
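
A challenge set like this is typically consumed with a small evaluation harness along the following lines; the record fields and the model_answer callable are assumptions for illustration, not the dataset's published schema.

```python
def evaluate(riddles, model_answer):
    """Score exact-match accuracy overall and per annotation dimension."""
    correct, by_dim = 0, {}
    for r in riddles:  # assumed record: {"riddle", "answer", "dimension"}
        hit = model_answer(r["riddle"]).strip().lower() == r["answer"].strip().lower()
        correct += hit
        hits, total = by_dim.get(r["dimension"], (0, 0))
        by_dim[r["dimension"]] = (hits + hit, total + 1)
    return correct / len(riddles), {k: h / n for k, (h, n) in by_dim.items()}
```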
DesignPref: Capturing Personal Preferences in Visual Design Generation
Positive · Artificial Intelligence
The introduction of DesignPref marks a significant advancement in the field of visual design generation, presenting a dataset of 12,000 pairwise comparisons of UI designs rated by 20 professional designers. This dataset highlights the subjective nature of design preferences, revealing substantial disagreement among trained designers, as indicated by a Krippendorff's alpha of 0.25 for binary preferences.
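
As a reference for the inter-rater figure quoted above, here is a minimal from-scratch computation of Krippendorff's alpha for nominal (e.g., binary) ratings; the formula is the standard coincidence-matrix one, and the toy input format is just one convenient choice.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: one list of ratings per item; items with < 2 ratings are skipped."""
    o = Counter()                              # coincidence counts o[(c, k)]
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for c, k in permutations(ratings, 2):  # all ordered rating pairs
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k)            # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c, k in permutations(n_c, 2))  # expected disagreement
    return 1.0 - (n - 1) * d_o / d_e           # needs >= 2 distinct categories

# Example: three items rated by two annotators each.
print(krippendorff_alpha_nominal([[1, 1], [1, 0], [0, 0]]))  # ~0.444
```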
Gram2Vec: An Interpretable Document Vectorizer
Positive · Artificial Intelligence
Gram2Vec is introduced as a grammatical style embedding system that transforms documents into a higher dimensional space by analyzing the normalized relative frequencies of grammatical features in the text. This method offers inherent interpretability compared to traditional neural approaches, with applications demonstrated in authorship verification and AI detection.
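
The core idea of counting normalized grammatical-feature frequencies can be sketched in a few lines; the coarse POS features below (via NLTK) are illustrative stand-ins for Gram2Vec's much richer feature inventory.

```python
import nltk  # assumes NLTK's tokenizer and POS-tagger data are downloaded

def grammar_vector(text, features=("NN", "VB", "JJ", "RB", "PRP", "IN")):
    """Normalized relative frequencies of coarse POS features in a document."""
    tokens = nltk.word_tokenize(text)
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    total = max(len(tags), 1)                  # guard against empty documents
    # prefix match so e.g. "NN" also counts NNS, NNP, NNPS
    return [sum(tag.startswith(f) for tag in tags) / total for f in features]
```

Because each dimension is a named grammatical feature, the resulting vector can be read off directly, which is the interpretability advantage claimed over opaque neural embeddings.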
When to Think and When to Look: Uncertainty-Guided Lookback
Positive · Artificial Intelligence
A systematic analysis of test-time thinking in large vision-language models (LVLMs) has been conducted, revealing that generating explicit intermediate reasoning chains can enhance performance, but excessive thinking may lead to incorrect outcomes. The study evaluated ten variants from the InternVL3.5 and Qwen3-VL families on the MMMU-val dataset, highlighting the importance of short lookback phrases that refer back to the image for successful visual reasoning.
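
One simple way to operationalize such an uncertainty trigger is sketched below; the entropy threshold and the lookback phrase are illustrative choices, not the paper's exact mechanism.

```python
import math

def maybe_inject_lookback(next_token_probs, threshold=2.0,
                          phrase="Looking back at the image, "):
    """Return a lookback phrase when the next-token distribution is uncertain."""
    entropy = -sum(p * math.log2(p) for p in next_token_probs if p > 0)
    return phrase if entropy > threshold else ""  # append to the decoding prefix
```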
Quantifying Modality Contributions via Disentangling Multimodal Representations
Positive · Artificial Intelligence
A new framework has been proposed to quantify modality contributions in multimodal models by utilizing Partial Information Decomposition (PID). This approach addresses the limitations of existing methods that conflate contribution with performance metrics, particularly in cross-attention architectures where modalities interact. The algorithm developed enables scalable, inference-only analysis of predictive information in internal embeddings.
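
For intuition, here is a textbook Partial Information Decomposition for two discrete sources using the original Williams-Beer I_min redundancy; the paper applies PID to internal embeddings at scale and may use a different redundancy measure, so this is a toy illustration of the decomposition itself.

```python
import numpy as np

def pid_two_sources(p):
    """p[x1, x2, y]: joint distribution over two sources and a target."""
    p = p / p.sum()
    p_y = p.sum(axis=(0, 1))
    p_x1y, p_x2y = p.sum(axis=1), p.sum(axis=0)    # pairwise marginals with y

    def mi(p_xy):                                  # I(X; Y) in bits
        outer = p_xy.sum(axis=1, keepdims=True) @ p_xy.sum(axis=0, keepdims=True)
        ratio = np.divide(p_xy, outer, out=np.ones_like(p_xy), where=p_xy > 0)
        return float((p_xy * np.log2(ratio)).sum())

    def specific(p_xy, y):                         # I(X; Y = y)
        p_x = p_xy.sum(axis=1)
        p_x_given_y = p_xy[:, y] / p_y[y]
        p_y_given_x = np.divide(p_xy[:, y], p_x,
                                out=np.zeros_like(p_x), where=p_x > 0)
        nz = p_x_given_y > 0
        return float((p_x_given_y[nz] * np.log2(p_y_given_x[nz] / p_y[y])).sum())

    # Williams-Beer redundancy: expected minimum specific information
    red = sum(p_y[y] * min(specific(p_x1y, y), specific(p_x2y, y))
              for y in range(len(p_y)) if p_y[y] > 0)
    i1, i2 = mi(p_x1y), mi(p_x2y)
    i_joint = mi(p.reshape(-1, p.shape[2]))        # treat (x1, x2) as one source
    return {"redundant": red, "unique_1": i1 - red,
            "unique_2": i2 - red, "synergy": i_joint - i1 - i2 + red}
```

The four terms sum to the joint mutual information I(X1, X2; Y), which is the kind of accounting a per-modality contribution analysis relies on.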