VLMDiff: Leveraging Vision-Language Models for Multi-Class Anomaly Detection with Diffusion

arXiv — cs.CV · Wednesday, November 12, 2025 at 5:00:00 AM
The introduction of VLMDiff marks a significant advancement in the field of visual anomaly detection. By integrating a Latent Diffusion Model with a Vision-Language Model, VLMDiff addresses the challenges of detecting anomalies in diverse, multi-class images. Traditional methods often rely on synthetic noise generation and require extensive per-class model training, which limits scalability. In contrast, VLMDiff utilizes pre-trained Vision-Language Models to generate normal captions without manual annotations, conditioning the diffusion model to learn robust representations of normal image features. This novel approach has demonstrated competitive performance, improving the pixel-level Per-Region-Overlap (PRO) metric by up to 25 points on the Real-IAD dataset and 8 points on the COCO-AD dataset, thus outperforming state-of-the-art diffusion-based methods. The availability of the code on GitHub further facilitates the adoption and exploration of this innovative framework.
— via World Pulse Now AI Editorial System
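To make the caption-conditioning idea concrete, the sketch below shows how a pre-trained captioner could supply text conditioning for a reconstruction-based anomaly scorer. It is a minimal illustration of the general recipe described above, not the authors' implementation: the NormalCaptioner wrapper and the reconstruction-error scoring are assumptions made for this example, and only the BLIP captioning calls use a real, published API.

```python
# Illustrative sketch in the spirit of VLMDiff: a pre-trained VLM captions
# normal training images (no manual annotations), and those captions condition
# a reconstruction model whose per-pixel error serves as an anomaly map.
# NormalCaptioner and anomaly_map are hypothetical stand-ins, not VLMDiff code.
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration


class NormalCaptioner:
    """Generates a caption for a normal image with a pre-trained VLM,
    replacing per-class manual prompt engineering."""

    def __init__(self, name: str = "Salesforce/blip-image-captioning-base"):
        self.processor = BlipProcessor.from_pretrained(name)
        self.model = BlipForConditionalGeneration.from_pretrained(name)

    @torch.no_grad()
    def caption(self, image: Image.Image) -> str:
        inputs = self.processor(images=image, return_tensors="pt")
        out = self.model.generate(**inputs, max_new_tokens=30)
        return self.processor.decode(out[0], skip_special_tokens=True)


def anomaly_map(x: torch.Tensor, x_hat: torch.Tensor) -> torch.Tensor:
    """Pixel-level anomaly score as the reconstruction error between the input
    x and a caption-conditioned reconstruction x_hat, both (B, C, H, W).
    Reconstruction error is one common scoring choice, assumed here."""
    return (x - x_hat).abs().mean(dim=1)  # (B, H, W)
```

In this reading of the pipeline, the captioner defines what "normal" looks like in text, and the conditioned generative model is trained only on normal images, so regions it cannot reconstruct under that conditioning score high in the anomaly map.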


Recommended Readings
Let Language Constrain Geometry: Vision-Language Models as Semantic and Spatial Critics for 3D Generation
Positive · Artificial Intelligence
The paper introduces VLM3D, a novel framework that utilizes vision-language models (VLMs) to enhance text-to-3D generation. It addresses two major limitations in current models: the lack of fine-grained semantic alignment and inadequate 3D spatial understanding. VLM3D employs a dual-query critic signal to evaluate both semantic fidelity and geometric coherence, significantly improving the generation process. The framework demonstrates its effectiveness across different paradigms, marking a step forward in 3D generation technology.
GMAT: Grounded Multi-Agent Clinical Description Generation for Text Encoder in Vision-Language MIL for Whole Slide Image Classification
Positive · Artificial Intelligence
The article presents GMAT, a framework that enhances Multiple Instance Learning (MIL) for whole slide image (WSI) classification. By integrating vision-language models (VLMs), GMAT generates clinical descriptions that are more expressive and medically specific than those produced by existing methods, which rely on large language models (LLMs) and often lack domain grounding and detailed medical specificity; the grounded descriptions improve alignment with visual features.
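As background for where such descriptions are used, the sketch below illustrates a generic vision-language MIL scoring step: patch features are attention-pooled into a slide embedding and compared against text embeddings of per-class clinical descriptions. The module names, dimensions, and cosine-similarity scoring are illustrative assumptions, not GMAT's actual architecture.

```python
# Generic vision-language MIL illustration: attention-pool patch embeddings
# into a slide embedding, then score it against class-description embeddings.
# All names and shapes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionMILPooling(nn.Module):
    """Attention pooling over patch embeddings (attention-based MIL style)."""

    def __init__(self, dim: int = 512, hidden: int = 256):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (n_patches, dim) -> slide embedding (dim,)
        weights = self.attn(patches).softmax(dim=0)  # (n_patches, 1)
        return (weights * patches).sum(dim=0)


def classify_slide(patches: torch.Tensor, class_text_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between the pooled slide embedding and text embeddings
    of per-class clinical descriptions (e.g., produced by a VLM/LLM pipeline).
    The pooling module is freshly initialized here purely for illustration."""
    slide = AttentionMILPooling(patches.shape[-1])(patches)
    return F.cosine_similarity(slide.unsqueeze(0), class_text_emb, dim=-1)  # (n_classes,)
```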
Doubly Debiased Test-Time Prompt Tuning for Vision-Language Models
Positive · Artificial Intelligence
The paper discusses the challenges of test-time prompt tuning for vision-language models, highlighting the issue of prompt optimization bias that can lead to suboptimal performance in downstream tasks. It identifies two main causes: the model's focus on entropy minimization, which may overlook prediction accuracy, and data misalignment between visual and textual modalities. To address these issues, the authors propose a new method called Doubly Debiased Test-Time Prompt Tuning, aimed at improving model performance in zero-shot settings.
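As context for the bias the paper identifies, the sketch below shows the standard entropy-minimization objective that test-time prompt tuning methods typically optimize over augmented views of a test image. The helper names and the single gradient step are simplifying assumptions for illustration; this is the baseline being critiqued, not the proposed doubly debiased method.

```python
# Minimal sketch of entropy-minimization test-time prompt tuning (the baseline
# objective the paper critiques). The prompt embedding and the CLIP-style
# scoring callable are assumed stand-ins, not the paper's implementation.
import torch


def marginal_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Entropy of the prediction averaged over augmented views.

    logits: (n_views, n_classes) similarity scores for one test image.
    """
    probs = logits.softmax(dim=-1).mean(dim=0)             # average over views
    return -(probs * probs.clamp_min(1e-12).log()).sum()   # H(p_avg)


def tune_prompt_one_step(prompt_emb: torch.Tensor, view_logits_fn, lr: float = 5e-3):
    """One gradient step on a learnable prompt embedding.

    view_logits_fn: callable returning (n_views, n_classes) logits given the
    current prompt embedding; assumed to wrap a CLIP-style scoring pipeline.
    """
    prompt_emb = prompt_emb.detach().clone().requires_grad_(True)
    loss = marginal_entropy(view_logits_fn(prompt_emb))
    loss.backward()
    with torch.no_grad():
        prompt_emb -= lr * prompt_emb.grad
    return prompt_emb.detach()
```

Because the objective rewards confident (low-entropy) predictions regardless of whether they are correct, it can drift away from accuracy; that is the prompt optimization bias the proposed method aims to debias.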
Concept-as-Tree: A Controllable Synthetic Data Framework Makes Stronger Personalized VLMs
Positive · Artificial Intelligence
The paper titled 'Concept-as-Tree: A Controllable Synthetic Data Framework Makes Stronger Personalized VLMs' discusses the advancements in Vision-Language Models (VLMs) aimed at enhancing personalization. It highlights the challenges posed by the lack of user-provided positive samples and the poor quality of negative samples. To address these issues, the authors introduce the Concept-as-Tree (CaT) framework, which generates diverse positive and negative samples, thus improving VLM performance in personalization tasks.
NeuS-QA: Grounding Long-Form Video Understanding in Temporal Logic and Neuro-Symbolic Reasoning
Neutral · Artificial Intelligence
NeuS-QA is a new neuro-symbolic pipeline designed to enhance Long Video Question Answering (LVQA) by addressing the limitations of traditional vision-language models (VLMs). While VLMs perform well with single images and short videos, they struggle with LVQA due to the need for complex temporal reasoning. NeuS-QA offers a training-free, plug-and-play solution that improves interpretability by ensuring only logic-verified segments are processed by the VLM, thus enhancing the model's ability to understand long-form video content.
Zero-Shot Temporal Interaction Localization for Egocentric Videos
Positive · Artificial Intelligence
The paper titled 'Zero-Shot Temporal Interaction Localization for Egocentric Videos' presents a novel approach called EgoLoc, aimed at improving the localization of human-object interactions in egocentric videos. Traditional methods rely heavily on annotated action and object categories, leading to domain bias and inefficiencies. EgoLoc introduces a self-adaptive sampling strategy to enhance visual prompts for vision-language model reasoning, ultimately achieving better temporal interaction localization.
Human-Corrected Labels Learning: Enhancing Labels Quality via Human Correction of VLMs Discrepancies
Positive · Artificial Intelligence
The article discusses the introduction of Human-Corrected Labels (HCLs) to improve the quality of labels generated by Vision-Language Models (VLMs). It highlights the issues of low-quality labels and the lack of error correction in VLM outputs. The proposed method involves human intervention to correct discrepancies in VLM-generated labels, leading to enhanced annotation quality and reduced labor costs, supported by extensive experimental results.
Visual Document Understanding and Reasoning: A Multi-Agent Collaboration Framework with Agent-Wise Adaptive Test-Time Scaling
Positive · Artificial Intelligence
The article introduces MACT, a Multi-Agent Collaboration framework designed to enhance understanding and reasoning in Vision-Language Models (VLMs). It addresses the limitations of monolithic scaling by implementing agent-wise adaptive test-time scaling, which allows for dynamic adjustments based on the functional entities involved in visual document processing. MACT comprises four specialized agents—planning, execution, judgment, and answer—aimed at reducing cognitive overload and ensuring factual accuracy through a self-correction loop.