VLM-NCD: Novel Class Discovery with Vision-Based Large Language Models

arXiv — cs.CV · Friday, December 12, 2025 at 5:00:00 AM
  • VLM-NCD is a newly introduced novel class discovery framework that uses vision-based large language models to classify known classes and discover unknown classes in unlabelled data. It addresses the limitations of existing methods that rely primarily on visual features, which often lack discriminative power and degrade under shifts in data distribution.
  • This development is significant because it yields a marked accuracy improvement on unknown classes, up to 25.3% over current methods on the CIFAR-100 dataset. The dual-phase discovery mechanism and the fusion of visual-textual semantics (illustrated in the sketch after this summary) make VLM-NCD a notable advance in the field.
  • The advancement of VLM-NCD reflects a broader trend in artificial intelligence towards integrating multimodal data to improve learning outcomes. As challenges such as class uncertainty and noisy labels persist in deep learning, frameworks like VLM-NCD, alongside other emerging methods, highlight the ongoing efforts to refine classification techniques and enhance model robustness in complex environments.
— via World Pulse Now AI Editorial System
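
The dual-phase fusion idea above can be made concrete with a rough sketch. The Python below is illustrative only, not the paper's method: it pairs stand-in visual embeddings with their nearest text embeddings (phase one, fusion) and clusters the fused features to surface candidate novel classes (phase two, discovery). Every name, dimension, and value here is an assumption.

```python
# Illustrative sketch of visual-textual fusion for novel class discovery.
# All embeddings are random stand-ins; a real pipeline would take them
# from a vision-language model's image and text encoders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
visual_feats = normalize(rng.normal(size=(200, 512)))  # 200 unlabelled images
text_feats = normalize(rng.normal(size=(10, 512)))     # 10 candidate class names

# "Phase one" (assumed): fuse modalities by concatenating each image's
# visual feature with its most similar text feature.
sims = visual_feats @ text_feats.T                     # cosine similarities
fused = np.hstack([visual_feats, text_feats[sims.argmax(axis=1)]])

# "Phase two" (assumed): cluster the fused features; clusters matching
# no labelled class would be treated as newly discovered classes.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(fused)
print(np.bincount(clusters))  # cluster sizes over the unlabelled pool
```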


Continue Reading
AEBNAS: Strengthening Exit Branches in Early-Exit Networks through Hardware-Aware Neural Architecture Search
Positive · Artificial Intelligence
AEBNAS introduces a hardware-aware Neural Architecture Search (NAS) framework designed to strengthen early-exit networks, which reduce energy consumption and latency in deep learning models by letting easy inputs leave through intermediate exit branches. This approach aims to balance efficiency and accuracy, particularly on resource-constrained devices.
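
Early-exit networks themselves follow a well-known pattern, sketched below in PyTorch. The architecture and confidence threshold are placeholders chosen for illustration; AEBNAS's contribution is searching the exit-branch designs under hardware constraints, which this sketch does not reproduce.

```python
# Minimal early-exit network: an input returns from an intermediate
# branch when its prediction is already confident, skipping later blocks.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.exit1 = nn.Linear(64, num_classes)  # intermediate exit branch
        self.exit2 = nn.Linear(64, num_classes)  # final exit

    def forward(self, x: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
        h = self.block1(x)
        early = self.exit1(h)
        # Easy inputs exit here, saving the cost of the remaining block.
        if early.softmax(dim=-1).max() >= threshold:
            return early
        return self.exit2(self.block2(h))

net = EarlyExitNet()
print(net(torch.randn(1, 32)).shape)  # torch.Size([1, 10])
```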
Sample-wise Adaptive Weighting for Transfer Consistency in Adversarial Distillation
Positive · Artificial Intelligence
A new approach called Sample-wise Adaptive Adversarial Distillation (SAAD) has been proposed to enhance adversarial robustness in neural networks by reweighting training examples based on their transferability. This method addresses the issue of robust saturation, where stronger teacher networks do not necessarily lead to more robust student networks, and aims to improve the effectiveness of adversarial training without incurring additional computational costs.
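
To make the reweighting idea concrete, here is a hedged sketch of a per-sample weighted distillation loss in PyTorch. The weights are random stand-ins: the summary does not specify how SAAD measures transferability, so deriving them is left as an assumption.

```python
# Per-sample weighted knowledge-distillation loss (illustrative).
import torch
import torch.nn.functional as F

def weighted_distill_loss(student_logits, teacher_logits, weights, T=4.0):
    # KL divergence between temperature-softened teacher and student
    # distributions, kept per-sample so each example can be reweighted.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="none",
    ).sum(dim=-1) * (T * T)
    # SAAD would derive `weights` from each sample's robustness
    # transferability; random values stand in for that here.
    return (weights * kl).mean()

s = torch.randn(8, 10)  # student logits for a batch of 8
t = torch.randn(8, 10)  # teacher logits
w = torch.rand(8)       # stand-in per-sample weights
print(weighted_distill_loss(s, t, w).item())
```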
