Pseudo Anomalies Are All You Need: Diffusion-Based Generation for Weakly-Supervised Video Anomaly Detection

arXiv — cs.CV · Tuesday, December 9, 2025 at 5:00:00 AM
  • A new approach to video anomaly detection, named PA-VAD, has been introduced; it trains on synthesized pseudo-abnormal videos alongside real normal videos. This design circumvents the scarcity of real abnormal footage, reaching 98.2% on the ShanghaiTech dataset and 82.5% on UCF-Crime.
  • The development of PA-VAD is significant as it enables more effective video anomaly detection without the need for extensive datasets of abnormal footage, thus reducing costs and improving accessibility for practical applications in surveillance and security.
  • This innovation reflects a growing trend in artificial intelligence: generative models and weakly-supervised learning techniques are increasingly employed to strengthen detection capabilities across domains, from zero-shot anomaly detection to customizable video analysis, addressing the limitations of traditional fully supervised methods.
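The training recipe described above, using synthesized pseudo-abnormal clips as positives against real normal clips, can be sketched as a simple binary classifier over clip features. This is a minimal illustrative sketch, not the paper's actual architecture: the names (`FeatureClassifier`, `train_step`) and dimensions are assumptions, and random tensors stand in for features that a diffusion generator and video backbone would produce.

```python
import torch
import torch.nn as nn

class FeatureClassifier(nn.Module):
    """Scores per-clip features with an anomaly probability in [0, 1]."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_step(model, opt, normal_feats, pseudo_abnormal_feats):
    """One step: real normal clips get label 0, diffusion-synthesized
    pseudo-abnormal clips get label 1, so no real abnormal footage is needed."""
    feats = torch.cat([normal_feats, pseudo_abnormal_feats])
    labels = torch.cat([
        torch.zeros(len(normal_feats)),
        torch.ones(len(pseudo_abnormal_feats)),
    ])
    loss = nn.functional.binary_cross_entropy(model(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Random stand-ins for clip features; in the actual pipeline a diffusion
# model would synthesize the pseudo-abnormal clips.
torch.manual_seed(0)
model = FeatureClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = train_step(model, opt, torch.randn(8, 512), torch.randn(8, 512))
```

At inference time the same scorer runs over real video clips, flagging those with high predicted anomaly probability.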
— via World Pulse Now AI Editorial System


Continue Reading
WeatherDiffusion: Controllable Weather Editing in Intrinsic Space
Positive · Artificial Intelligence
WeatherDiffusion has been introduced as a diffusion-based framework that enables controllable weather editing in intrinsic space, utilizing an inverse renderer to estimate material properties and scene geometry from input images. This framework enhances the editing process by generating images based on specific weather conditions described in text prompts.
Mitigating Bias with Words: Inducing Demographic Ambiguity in Face Recognition Templates by Text Encoding
Positive · Artificial Intelligence
A novel strategy called Unified Text-Image Embedding (UTIE) has been proposed to mitigate demographic biases in face recognition systems by inducing demographic ambiguity in face embeddings. This approach enriches facial embeddings with information from various demographic groups, promoting fairer verification performance across different demographics.
Dynamic Facial Expressions Analysis Based Parkinson's Disease Auxiliary Diagnosis
Positive · Artificial Intelligence
A novel method for auxiliary diagnosis of Parkinson's disease (PD) has been proposed, utilizing dynamic facial expression analysis to identify hypomimia, a key symptom of the disorder. This approach employs a multimodal facial expression analysis network that integrates visual and textual features while maintaining the temporal dynamics of facial expressions, ultimately processed through an LSTM-based classification network.
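The pipeline above ends in an LSTM-based classification network over temporally ordered facial-expression features. The following is a hedged sketch of that final stage only; the class name `ExpressionLSTM`, the feature dimension, and the two-class output are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ExpressionLSTM(nn.Module):
    """Classifies a sequence of per-frame facial-expression features
    (e.g., fused visual and textual embeddings) with an LSTM head,
    preserving the temporal dynamics of the expression."""
    def __init__(self, feat_dim=256, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, seq):            # seq: (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(seq)   # final hidden state summarizes dynamics
        return self.head(h_n[-1])      # (batch, n_classes) logits

model = ExpressionLSTM()
logits = model(torch.randn(4, 30, 256))  # 4 videos, 30 frames each
```

The last hidden state is used as the sequence summary; attention pooling over all timesteps would be a common alternative.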
Defect-aware Hybrid Prompt Optimization via Progressive Tuning for Zero-Shot Multi-type Anomaly Detection and Segmentation
Positive · Artificial Intelligence
A new study introduces a defect-aware hybrid prompt optimization method, termed DAPO, aimed at enhancing zero-shot multi-type anomaly detection and segmentation. This approach leverages high-level semantic information from vision-language models like CLIP, addressing the challenge of recognizing fine-grained anomaly types such as 'hole', 'cut', and 'scratch'.
DynaIP: Dynamic Image Prompt Adapter for Scalable Zero-shot Personalized Text-to-Image Generation
Positive · Artificial Intelligence
The Dynamic Image Prompt Adapter (DynaIP) has been introduced as a novel tool aimed at enhancing Personalized Text-to-Image (PT2I) generation, addressing key challenges such as maintaining concept fidelity and scalability for multi-subject personalization. This advancement allows for zero-shot PT2I without the need for test-time fine-tuning, leveraging multimodal diffusion transformers (MM-DiT) to improve image generation quality.
Decoupling Template Bias in CLIP: Harnessing Empty Prompts for Enhanced Few-Shot Learning
Positive · Artificial Intelligence
The study introduces a framework that utilizes empty prompts to mitigate template-sample similarity bias in the CLIP model, enhancing its few-shot learning capabilities. This approach reveals and reduces bias during pre-training and enforces correct alignment during fine-tuning, ultimately improving classification accuracy and robustness.
Shape and Texture Recognition in Large Vision-Language Models
Neutral · Artificial Intelligence
The Large Shapes and Textures dataset (LAS&T) has been introduced to enhance the capabilities of Large Vision-Language Models (LVLMs) in recognizing and representing shapes and textures across various contexts. This dataset, created through unsupervised extraction from natural images, serves as a benchmark for evaluating the performance of leading models like CLIP and DINO in shape recognition tasks.
OpenMonoGS-SLAM: Monocular Gaussian Splatting SLAM with Open-set Semantics
Positive · Artificial Intelligence
OpenMonoGS-SLAM has been introduced as a pioneering monocular SLAM framework that integrates 3D Gaussian Splatting with open-set semantic understanding, enhancing the capabilities of simultaneous localization and mapping in robotics and autonomous systems. This development leverages advanced Visual Foundation Models to improve tracking and mapping accuracy in diverse environments.
