VLMDiff: Leveraging Vision-Language Models for Multi-Class Anomaly Detection with Diffusion

arXiv — cs.CV · Wednesday, November 12, 2025 at 5:00:00 AM
The introduction of VLMDiff marks a significant advancement in visual anomaly detection. By integrating a Latent Diffusion Model with a Vision-Language Model, VLMDiff addresses the challenge of detecting anomalies across diverse, multi-class images. Traditional methods often rely on synthetic noise generation and require extensive per-class model training, which limits scalability. In contrast, VLMDiff uses a pre-trained Vision-Language Model to generate captions of normal images without manual annotations and conditions the diffusion model on these captions, so that it learns robust representations of normal image features. The approach is competitive with, and in places surpasses, state-of-the-art diffusion-based methods, improving the pixel-level Per-Region-Overlap (PRO) metric by up to 25 points on the Real-IAD dataset and 8 points on the COCO-AD dataset. The code is available on GitHub, which should ease adoption and further exploration of the framework.
— via World Pulse Now AI Editorial System
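
To make the caption-conditioning idea concrete, here is a minimal Python sketch of reconstruction-based scoring with a text-conditioned diffusion model. It is not VLMDiff's actual code: the CaptionVLM and LatentDiffusion classes are stand-ins for a frozen pre-trained captioner and a latent diffusion model, and the anomaly map is simply the per-pixel reconstruction error.

```python
# Illustrative sketch of caption-conditioned reconstruction for anomaly detection.
# All component names (CaptionVLM, LatentDiffusion) are placeholders, not VLMDiff's API.

import torch

class CaptionVLM:
    """Stand-in for a frozen pre-trained vision-language captioner."""
    def caption(self, image: torch.Tensor) -> str:
        # A real implementation would run a BLIP-style captioner here.
        return "a close-up photo of an intact metal part"

class LatentDiffusion:
    """Stand-in for a latent diffusion model conditioned on text."""
    def reconstruct(self, image: torch.Tensor, text: str) -> torch.Tensor:
        # Encode -> add noise -> denoise conditioned on the caption -> decode.
        # Here we just return the input so the sketch stays runnable.
        return image

def anomaly_map(image: torch.Tensor, vlm: CaptionVLM, ldm: LatentDiffusion) -> torch.Tensor:
    """Pixel-level anomaly score: distance between the image and its
    caption-conditioned reconstruction. Anomalous regions are assumed to
    reconstruct poorly because the model only learned normal appearance."""
    caption = vlm.caption(image)             # "normal" caption, no manual labels
    recon = ldm.reconstruct(image, caption)  # caption-conditioned generation
    return (image - recon).abs().mean(dim=0) # HxW anomaly heat map

if __name__ == "__main__":
    img = torch.rand(3, 256, 256)            # dummy RGB image
    scores = anomaly_map(img, CaptionVLM(), LatentDiffusion())
    print(scores.shape, float(scores.max()))
```
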


Continue Reading
Cascading multi-agent anomaly detection in surveillance systems via vision-language models and embedding-based classification
Positive · Artificial Intelligence
A new framework for cascading multi-agent anomaly detection in surveillance systems has been introduced, utilizing vision-language models and embedding-based classification to enhance real-time performance and semantic interpretability. This approach integrates various methodologies, including reconstruction-gated filtering and object-level assessments, to address the complexities of detecting anomalies in dynamic visual environments.
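
As a rough illustration of such a cascade (not the authors' implementation), the sketch below uses a cheap reconstruction-error gate to filter frames and forwards only flagged frames to an embedding-based classifier; the autoencoder, embedder, prototypes, and threshold are all hypothetical placeholders.

```python
# Illustrative cascade: a cheap reconstruction gate filters frames, and only
# flagged frames reach the more expensive embedding-based classifier.

import numpy as np

def reconstruction_error(frame: np.ndarray, autoencoder) -> float:
    """Stage 1: mean squared error between a frame and its reconstruction."""
    recon = autoencoder(frame)
    return float(np.mean((frame - recon) ** 2))

def classify_by_embedding(embedding: np.ndarray, prototypes: dict) -> str:
    """Stage 2: assign the nearest labelled prototype by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return max(prototypes, key=lambda label: cos(embedding, prototypes[label]))

def cascade(frame, autoencoder, embedder, prototypes, gate_threshold=0.05):
    """Run the semantic stage only when the reconstruction gate fires."""
    if reconstruction_error(frame, autoencoder) < gate_threshold:
        return "normal"
    return classify_by_embedding(embedder(frame), prototypes)

if __name__ == "__main__":
    # Toy usage with stand-in models: a deliberately bad reconstructor so the
    # gate fires, and a random "embedder" in place of a real feature extractor.
    frame = np.random.rand(64, 64)
    protos = {"loitering": np.random.rand(8), "intrusion": np.random.rand(8)}
    print(cascade(frame, lambda x: np.zeros_like(x), lambda x: np.random.rand(8), protos))
```
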
VMMU: A Vietnamese Multitask Multimodal Understanding and Reasoning Benchmark
Neutral · Artificial Intelligence
VMMU, a Vietnamese Multitask Multimodal Understanding and Reasoning Benchmark, has been introduced to assess how well vision-language models (VLMs) interpret and reason over visual and textual information in Vietnamese. The benchmark comprises 2.5k multimodal questions across seven diverse tasks and emphasizes genuine multimodal integration rather than reliance on text-only cues.
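
One simple way to probe reliance on text-only cues, sketched below under assumed item fields and a hypothetical model.answer interface (not VMMU's real harness), is to score a model on the same questions with and without the accompanying image and compare the two accuracies.

```python
# Hedged sketch: a large accuracy gap between the with-image and text-only
# settings suggests the questions genuinely require multimodal integration.

def accuracy(model, items, use_image=True):
    correct = 0
    for item in items:
        image = item["image"] if use_image else None
        pred = model.answer(question=item["question"], image=image)
        correct += int(pred == item["answer"])
    return correct / len(items)

def text_only_gap(model, items):
    """Accuracy with images minus accuracy without them."""
    return accuracy(model, items, use_image=True) - accuracy(model, items, use_image=False)

if __name__ == "__main__":
    class StubVLM:
        """Placeholder model: answers 'A' regardless of input (illustration only)."""
        def answer(self, question, image=None):
            return "A"

    items = [{"question": "Q1?", "image": object(), "answer": "A"},
             {"question": "Q2?", "image": object(), "answer": "B"}]
    print(text_only_gap(StubVLM(), items))  # 0.0 for a model that ignores the image
```
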
