HV-Attack: Hierarchical Visual Attack for Multimodal Retrieval Augmented Generation

arXiv — cs.CV · Thursday, November 20, 2025 at 5:00:00 AM
  • The study introduces HV-Attack, a novel Hierarchical Visual Attack targeting multimodal Retrieval-Augmented Generation (RAG) systems (a schematic sketch of this class of visual attack follows this summary).
  • The implications are significant: the work underscores the need for stronger security measures in Large Multimodal Models (LMMs), which are increasingly used across a wide range of applications, and it raises concerns about the safety and reliability of these systems in real-world deployments.
  • This development is part of a broader discourse on the vulnerabilities of AI systems, particularly to adversarial attacks. Ongoing work on how such attacks transfer across models points to a pressing need for robust defenses against emerging threats in the AI landscape.
— via World Pulse Now AI Editorial System
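
For readers unfamiliar with how a visual attack can disrupt retrieval, the sketch below shows a generic, PGD-style perturbation of a query image that pushes its embedding away from the clean embedding, so a similarity-based retriever stops surfacing the originally relevant documents. This is an illustrative assumption, not the HV-Attack algorithm itself; the CLIP checkpoint, loss, pixel range, and hyperparameters are placeholders.

```python
# Minimal sketch of a visual attack on an embedding-based retriever.
# NOT the paper's HV-Attack; model, loss, and budgets are illustrative.
import torch
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()

def perturb_query_image(pixel_values, eps=8 / 255, alpha=1 / 255, steps=20):
    """Return an adversarial copy of pixel_values (assumed in [0, 1])."""
    with torch.no_grad():
        clean = torch.nn.functional.normalize(
            model.get_image_features(pixel_values=pixel_values), dim=-1)

    adv = pixel_values.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        emb = torch.nn.functional.normalize(
            model.get_image_features(pixel_values=adv), dim=-1)
        sim = (emb * clean).sum()   # cosine similarity to the clean embedding
        sim.backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()                          # push similarity down
            adv = pixel_values + (adv - pixel_values).clamp(-eps, eps)   # stay in the L-inf ball
            adv = adv.clamp(0, 1)                                        # keep valid pixel range
    return adv.detach()
```

HV-Attack's hierarchical design presumably goes well beyond this single-objective loop; the sketch only conveys the general threat model of perturbing an image so that a multimodal retriever is misled.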


Recommended Readings
Automated Detection of Visual Attribute Reliance with a Self-Reflective Agent
Positive · Artificial Intelligence
The paper presents an automated framework for detecting visual attribute reliance in trained vision models. It introduces a self-reflective agent that generates and tests hypotheses about the visual attributes influencing model predictions. This iterative process allows the agent to refine its hypotheses based on experimental results and assess the accuracy of its findings, ensuring model robustness and preventing overfitting.
Unbiased Semantic Decoding with Vision Foundation Models for Few-shot Segmentation
Positive · Artificial Intelligence
The paper presents an Unbiased Semantic Decoding (USD) strategy integrated with the Segment Anything Model (SAM) for few-shot segmentation tasks. This approach aims to enhance the model's generalization ability by extracting target information from both support and query sets simultaneously, addressing the limitations of previous methods that relied heavily on explicit prompts. The study highlights the potential of USD in improving segmentation accuracy across unknown classes.
D4C: Data-free Quantization for Contrastive Language-Image Pre-training Models
Positive · Artificial Intelligence
Data-Free Quantization (DFQ) enables model compression without access to real data, which is valuable in privacy-sensitive contexts. While DFQ has been effective for unimodal models, its application to Vision-Language Models like CLIP has not been thoroughly investigated. This study introduces D4C, a DFQ framework designed specifically for CLIP, addressing challenges such as preserving semantic content and intra-image diversity in the synthesized calibration samples.
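
As background for what any DFQ pipeline ultimately produces, the sketch below shows a basic symmetric post-training weight quantization step. It is a generic illustration, not D4C's method; D4C's contribution of synthesizing CLIP-appropriate calibration data is not reproduced here.

```python
# Generic symmetric per-tensor weight quantization (illustrative only).
import torch

def quantize_weight_symmetric(w: torch.Tensor, n_bits: int = 8):
    """Quantize a weight tensor to n_bits signed integers with one shared scale."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max() / qmax                      # single scale for the whole tensor
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q.to(torch.int8), scale                    # dequantize later as q * scale
```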
FarSLIP: Discovering Effective CLIP Adaptation for Fine-Grained Remote Sensing Understanding
Positive · Artificial Intelligence
The article discusses the limitations of the CLIP model in capturing fine-grained details in remote sensing (RS) data. It highlights two main issues: the underutilization of object-level supervision in RS image-text datasets and the performance degradation of region-text alignment methods when applied to RS data. To address these challenges, the authors introduce the MGRS-200k dataset, which provides rich object-level textual supervision for improved RS region-category alignment.
Hierarchical Semantic Tree Anchoring for CLIP-Based Class-Incremental Learning
Positive · Artificial Intelligence
The paper presents HASTEN (Hierarchical Semantic Tree Anchoring), a novel approach for Class-Incremental Learning (CIL) that integrates hierarchical information to mitigate catastrophic forgetting. It leverages external knowledge graphs to enhance the learning of visual and textual features, addressing the limitations of existing CLIP-based CIL methods that fail to capture inherent hierarchies in visual and linguistic concepts.
A Hybrid Multimodal Deep Learning Framework for Intelligent Fashion Recommendation
Positive · Artificial Intelligence
The paper presents a hybrid multimodal deep learning framework designed for intelligent fashion recommendation, addressing outfit compatibility prediction and complementary item retrieval. Utilizing the CLIP architecture, the model integrates visual and textual encoders to create joint representations of fashion items. It achieves a high AUC of 0.95 for compatibility prediction on the Polyvore dataset and an accuracy of 69.24% for retrieving compatible items based on a target item description.
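
To make the joint-representation idea concrete, the sketch below ranks candidate fashion images against a target item description using off-the-shelf CLIP embeddings. The paper's hybrid framework adds task-specific training on Polyvore; the model name, preprocessing, and scoring here are illustrative assumptions only.

```python
# Rank candidate item images against a text description with CLIP (illustrative).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_candidates(description: str, candidate_images: list[Image.Image]):
    """Return candidate indices sorted by cosine similarity to the description."""
    inputs = processor(text=[description], images=candidate_images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    text_emb = torch.nn.functional.normalize(out.text_embeds, dim=-1)
    img_emb = torch.nn.functional.normalize(out.image_embeds, dim=-1)
    sims = (img_emb @ text_emb.T).squeeze(-1)         # one similarity score per candidate
    return sims.argsort(descending=True).tolist()
```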
FairJudge: MLLM Judging for Social Attributes and Prompt Image Alignment
Positive · Artificial Intelligence
FairJudge is a new protocol designed to evaluate the alignment of images with prompts in text-to-image (T2I) systems while addressing social attributes. It utilizes a rubric that scores alignment from -1 to 1, ensuring judgments are based on visible content and requiring abstention when cues are insufficient. This approach aims to provide accountable decisions and enhance evaluation fairness, particularly regarding gender, race, and age.
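
As a rough illustration of the rubric's shape, the sketch below models a single judgment as a record whose score lies in [-1, 1] and is withheld (abstention) when visible cues are insufficient. Field names and validation are assumptions, not FairJudge's actual schema.

```python
# Illustrative record for a rubric-style alignment judgment (assumed schema).
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlignmentJudgment:
    prompt: str
    score: Optional[float]   # in [-1, 1]; None means the judge abstained
    rationale: str           # must refer only to visible image content

def make_judgment(prompt: str, score: Optional[float], rationale: str) -> AlignmentJudgment:
    if score is not None and not -1.0 <= score <= 1.0:
        raise ValueError("rubric score must lie in [-1, 1]")
    return AlignmentJudgment(prompt=prompt, score=score, rationale=rationale)
```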
AdCare-VLM: Towards a Unified and Pre-aligned Latent Representation for Healthcare Video Understanding
Positive · Artificial Intelligence
AdCare-VLM is a new multimodal large vision-language model designed to enhance medication adherence for chronic diseases such as diabetes, hypertension, and tuberculosis. By utilizing a unified visual latent space, it aims to improve visual question answering related to patient videos. The model is fine-tuned on a dataset of 806 annotated tuberculosis medication-monitoring videos, enabling better detection of adherence patterns.