LookWhere? Efficient Visual Recognition by Learning Where to Look and What to See from Self-Supervision

arXiv — cs.CV · Tuesday, December 9, 2025 at 5:00:00 AM


Continue Reading
Enabling Validation for Robust Few-Shot Recognition
Positive · Artificial Intelligence
A recent study on Few-Shot Recognition (FSR) highlights the challenges of training Vision-Language Models (VLMs) with minimal labeled data, particularly the lack of validation data. The research proposes using retrieved open data for validation; although such data is out-of-distribution and may degrade performance, it offers a practical workaround for the scarcity of labeled validation sets.
Fast-ARDiff: An Entropy-informed Acceleration Framework for Continuous Space Autoregressive Generation
Positive · Artificial Intelligence
The Fast-ARDiff framework has been introduced as an innovative solution to enhance the efficiency of continuous space autoregressive generation by optimizing both autoregressive and diffusion components, thereby reducing latency in image synthesis processes. This framework employs an entropy-informed speculative strategy to improve representation alignment and integrates diffusion decoding into a unified end-to-end system.
Repulsor: Accelerating Generative Modeling with a Contrastive Memory Bank
Positive · Artificial Intelligence
A new framework named Repulsor has been introduced to enhance generative modeling by utilizing a contrastive memory bank, which eliminates the need for external encoders and addresses inefficiencies in representation learning. This method allows for a dynamic queue of negative samples, improving the training process of generative models without the overhead of pre-trained encoders.
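A dynamic queue of negative samples of this kind is typically maintained as a first-in-first-out feature bank that supplies negatives for a contrastive objective. The sketch below is illustrative only, assuming a MoCo-style FIFO queue and an InfoNCE-style loss; the class name, capacity, and loss are hypothetical stand-ins, not Repulsor's actual API.

```python
from collections import deque

import numpy as np


class ContrastiveMemoryBank:
    """FIFO queue of past feature vectors reused as contrastive negatives."""

    def __init__(self, capacity: int = 4096):
        # deque with maxlen evicts the oldest features automatically
        self.queue = deque(maxlen=capacity)

    def enqueue(self, features: np.ndarray) -> None:
        for f in features:
            self.queue.append(f / (np.linalg.norm(f) + 1e-8))  # store unit vectors

    def negatives(self) -> np.ndarray:
        return np.stack(self.queue)


def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss: pull anchor toward its positive, push it from queued negatives."""
    anchor = anchor / np.linalg.norm(anchor)
    positive = positive / np.linalg.norm(positive)
    logits = np.concatenate([[anchor @ positive], negatives @ anchor]) / temperature
    logits -= logits.max()  # numerical stability before the softmax
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

Because negatives come from the queue rather than a frozen pre-trained encoder, the bank refreshes as training progresses, which is the inefficiency the summary says Repulsor targets.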
DAASH: A Meta-Attack Framework for Synthesizing Effective and Stealthy Adversarial Examples
Positive · Artificial Intelligence
The introduction of DAASH, a meta-attack framework, marks a significant advancement in generating effective and perceptually aligned adversarial examples, addressing the limitations of traditional Lp-norm constrained methods. This framework strategically composes existing attack methods in a multi-stage process, enhancing the perceptual alignment of adversarial examples.
The Missing Point in Vision Transformers for Universal Image Segmentation
Positive · Artificial Intelligence
A novel two-stage segmentation framework named ViT-P has been introduced to enhance image segmentation tasks in computer vision. This framework decouples mask generation from classification, utilizing a proposal generator for class-agnostic mask proposals and a point-based classification model based on Vision Transformers to refine predictions. The approach aims to address challenges such as ambiguous boundaries and imbalanced class distributions in mask classification.
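The decoupling described above amounts to a simple control flow: stage one emits class-agnostic masks, stage two labels each mask through a representative point. A minimal sketch of that flow, with both stages replaced by hypothetical stubs (random proposals, a trivial brightness rule) rather than ViT-P's actual proposal generator or ViT classifier:

```python
import numpy as np


def propose_masks(image, num_proposals=3):
    """Stage 1 stub: class-agnostic binary mask proposals (stand-in for
    ViT-P's proposal generator)."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)
    return [rng.random((h, w)) > 0.5 for _ in range(num_proposals)]


def classify_point(image, point):
    """Stage 2 stub: point-based classification (stand-in for the ViT head);
    a trivial brightness rule, for illustration only."""
    y, x = point
    return "bright" if image[y, x].mean() > 0.5 else "dark"


def two_stage_pipeline(image):
    """Decoupled pipeline: masks are produced without labels, then each mask
    is labeled via one representative interior point."""
    results = []
    for mask in propose_masks(image):
        ys, xs = np.nonzero(mask)
        if ys.size == 0:
            continue  # skip degenerate empty proposals
        point = (int(ys[ys.size // 2]), int(xs[xs.size // 2]))
        results.append((mask, classify_point(image, point)))
    return results
```

The point of the split is that ambiguous boundaries only affect stage one, while class imbalance is handled by the point classifier in stage two.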
Rethinking Training Dynamics in Scale-wise Autoregressive Generation
Positive · Artificial Intelligence
Recent advancements in autoregressive generative models have led to the introduction of Self-Autoregressive Refinement (SAR), which aims to improve image generation quality by addressing exposure bias and optimization complexity. The proposed Stagger-Scale Rollout (SSR) mechanism allows models to learn from their intermediate predictions, enhancing the training dynamics in scale-wise autoregressive generation.
Distribution Matching Variational AutoEncoder
Neutral · Artificial Intelligence
The Distribution-Matching Variational AutoEncoder (DMVAE) has been introduced to address limitations in existing visual generative models, which often compress images into a latent space without explicitly shaping its distribution. DMVAE aligns the encoder's latent distribution with an arbitrary reference distribution, allowing for a more flexible modeling approach beyond the conventional Gaussian prior.
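One standard sample-based way to align an encoder's latent distribution with an arbitrary reference is Maximum Mean Discrepancy (MMD), which compares the two distributions through kernel statistics and needs only samples from each. The sketch below shows a biased MMD² estimator; it is a generic alignment term under that assumption, and DMVAE's actual objective may differ.

```python
import numpy as np


def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel matrix between two sample sets of shape (n, d)."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))


def mmd2(latents, reference):
    """Biased squared MMD between encoder latents and reference samples.

    Zero when the two sample sets coincide; grows as the distributions
    drift apart, so it can serve as an alignment penalty.
    """
    kxx = rbf_kernel(latents, latents).mean()
    kyy = rbf_kernel(reference, reference).mean()
    kxy = rbf_kernel(latents, reference).mean()
    return kxx + kyy - 2 * kxy
```

In a VAE-style training loop this term would be added to the reconstruction loss, with the reference samples drawn from whatever distribution is chosen in place of the conventional Gaussian prior.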
One Layer Is Enough: Adapting Pretrained Visual Encoders for Image Generation
Positive · Artificial Intelligence
A new framework called Feature Auto-Encoder (FAE) has been introduced to adapt pre-trained visual representations for image generation, addressing challenges in aligning high-dimensional features with low-dimensional generative models. This approach aims to simplify the adaptation process, enhancing the efficiency and quality of generated images.
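Mapping high-dimensional frozen features into a low-dimensional latent with a single layer can be illustrated with the simplest possible adapter: one linear map, fit here in closed form via PCA (the optimal linear autoencoder). This is a hypothetical sketch in the spirit of the title, not FAE's actual adapter or training objective.

```python
import numpy as np


def fit_one_layer_adapter(features, latent_dim):
    """Fit a single linear encode/decode pair from frozen encoder features.

    PCA gives the linear projection with minimal reconstruction error, so
    it serves as a closed-form stand-in for a learned one-layer adapter.
    """
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    w = vt[:latent_dim].T  # (feat_dim, latent_dim) projection matrix

    def encode(f):
        return (f - mean) @ w

    def decode(z):
        return z @ w.T + mean

    return encode, decode
```

In the framework described above, a generative model would then operate in the compact latent produced by `encode`, sidestepping the dimensionality mismatch the summary mentions.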