MultiPriv: Benchmarking Individual-Level Privacy Reasoning in Vision-Language Models

arXiv — cs.CV · Monday, November 24, 2025 at 5:00:00 AM
  • The introduction of MultiPriv marks a significant step forward in evaluating individual-level privacy reasoning in Vision-Language Models (VLMs). The benchmark addresses a gap in current privacy assessments, which largely measure privacy perception rather than a model's ability to link distributed pieces of information and construct an individual's profile. The framework is built around a novel bilingual multimodal dataset of synthetic individual profiles linked to sensitive attributes (a minimal sketch of this linkage setup appears after this summary).
  • This development matters because it highlights the escalating privacy risks posed by VLMs, which have moved beyond simple attribute recognition to more complex reasoning capabilities. By establishing a systematic way to evaluate privacy reasoning, MultiPriv aims to improve the accountability and safety of VLMs in applications involving personal data, thereby fostering trust in AI technologies.
  • The emergence of MultiPriv reflects a growing recognition of the need for robust privacy frameworks in AI, particularly as VLMs become increasingly integrated into various sectors, including autonomous driving and video intelligence. This shift towards prioritizing privacy reasoning aligns with broader discussions on ethical AI practices and the importance of safeguarding individual data in an era where AI systems are capable of sophisticated data processing and inference.
— via World Pulse Now AI Editorial System
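
The benchmark's exact schema and metrics are not reproduced in this summary, so the Python sketch below only illustrates the general shape of the linkage task it describes: synthetic profiles whose identifying pieces are scattered across items, and a toy measure of whether a model groups them back under one individual. All field and function names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SyntheticProfile:
    profile_id: str
    quasi_identifiers: dict      # hypothetical, e.g. {"face_crop": "img_017.png", "badge_text": "img_532.png"}
    sensitive_attributes: dict   # hypothetical, e.g. {"home_address": "...", "health_condition": "..."}

def linkage_accuracy(predictions: dict, profiles: list) -> float:
    """Toy stand-in for a linkage metric: the fraction of profiles whose
    distributed items the model correctly grouped under one individual."""
    correct = sum(
        1 for p in profiles
        if set(predictions.get(p.profile_id, [])) == set(p.quasi_identifiers.values())
    )
    return correct / max(len(profiles), 1)
```

The point of such a metric is that it rewards linking, not mere attribute recognition: a model can name every attribute correctly and still score zero if it cannot tie the scattered items to the same person.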


Continue Reading
MMT-ARD: Multimodal Multi-Teacher Adversarial Distillation for Robust Vision-Language Models
Positive · Artificial Intelligence
A new framework called MMT-ARD has been proposed to enhance the robustness of Vision-Language Models (VLMs) through a Multimodal Multi-Teacher Adversarial Distillation approach. This method addresses the limitations of traditional single-teacher distillation by incorporating a dual-teacher knowledge fusion architecture, which optimizes both clean feature preservation and robust feature enhancement.
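
The summary does not spell out the fusion architecture or loss terms, so the snippet below is only a generic sketch of two-teacher distillation under the stated goals (clean feature preservation plus robust feature enhancement). The function name, the `alpha` weighting, and the KL-based objective are assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def dual_teacher_distill_loss(student, clean_teacher, robust_teacher,
                              x_clean, x_adv, alpha=0.5):
    """Generic two-teacher distillation step: match a clean teacher on clean
    inputs and a robust (adversarially trained) teacher on adversarial inputs.
    A sketch of the general idea only, not MMT-ARD's exact objective."""
    with torch.no_grad():
        t_clean = clean_teacher(x_clean)
        t_robust = robust_teacher(x_adv)
    s_clean, s_adv = student(x_clean), student(x_adv)
    loss_clean = F.kl_div(F.log_softmax(s_clean, dim=-1),
                          F.softmax(t_clean, dim=-1), reduction="batchmean")
    loss_robust = F.kl_div(F.log_softmax(s_adv, dim=-1),
                           F.softmax(t_robust, dim=-1), reduction="batchmean")
    return alpha * loss_clean + (1 - alpha) * loss_robust
```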
Aligning Vision to Language: Annotation-Free Multimodal Knowledge Graph Construction for Enhanced LLMs Reasoning
Positive · Artificial Intelligence
A novel approach called Vision-align-to-Language integrated Knowledge Graph (VaLiK) has been proposed to enhance reasoning in Large Language Models (LLMs) by constructing Multimodal Knowledge Graphs (MMKGs) without the need for manual annotations. This method aims to address challenges such as incomplete knowledge and hallucination artifacts that LLMs face due to the limitations of traditional Knowledge Graphs (KGs).
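
As described, the approach grows multimodal knowledge graphs from VLM-generated text rather than manual annotation. The sketch below shows only that pipeline shape; `captioner` and `triple_extractor` are hypothetical callables standing in for whatever models the paper actually uses.

```python
def build_mmkg(image_paths, captioner, triple_extractor):
    """Annotation-free pipeline shape: caption each image with a VLM, then
    extract (head, relation, tail) triples from the caption text to grow a
    multimodal knowledge graph. Both callables are hypothetical stand-ins."""
    graph = set()
    for path in image_paths:
        caption = captioner(path)                    # e.g. any image-to-text model
        for head, relation, tail in triple_extractor(caption):
            graph.add((head, relation, tail, path))  # keep image provenance on each edge
    return graph
```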
PhyBlock: A Progressive Benchmark for Physical Understanding and Planning via 3D Block Assembly
Neutral · Artificial Intelligence
PhyBlock has been introduced as a progressive benchmark aimed at evaluating vision-language models (VLMs) on their physical understanding and planning capabilities through robotic 3D block assembly. The benchmark organizes assembly into a four-level cognitive hierarchy and comprises 2,600 tasks assessing spatial reasoning and physical comprehension.
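
Because the benchmark is organized as a four-level cognitive hierarchy, per-level reporting is the natural way to read results. The helper below is a hedged sketch of that aggregation; the `(level, is_correct)` record format is an assumption, not PhyBlock's actual output schema.

```python
from collections import defaultdict

def accuracy_by_level(results):
    """Aggregate accuracy per cognitive level; `results` is assumed to be an
    iterable of (level, is_correct) pairs -- a hypothetical record format."""
    totals, correct = defaultdict(int), defaultdict(int)
    for level, ok in results:
        totals[level] += 1
        correct[level] += int(ok)
    return {lvl: correct[lvl] / totals[lvl] for lvl in sorted(totals)}
```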
SPEAR-1: Scaling Beyond Robot Demonstrations via 3D Understanding
Positive · Artificial Intelligence
SPEAR-1 has been introduced as a significant advancement in robotic foundation models, aiming to enhance the generalization capabilities of robots across diverse environments and tasks. It addresses the limitations of existing models that rely primarily on 2D image-language tasks, which do not adequately support the 3D spatial reasoning necessary for effective robotic control.
MOCHA: Multi-modal Objects-aware Cross-arcHitecture Alignment
Positive · Artificial Intelligence
MOCHA, a new distillation framework, has been introduced to enhance personalized object detection by transferring multimodal knowledge from a frozen vision-language model (VLM) to a lightweight vision-only detector. This approach enables the effective recognition of user-specific instances from minimal examples without requiring modifications to the teacher model during inference.
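
The summary describes transferring multimodal knowledge from a frozen VLM into a lightweight vision-only detector. One common way to express such cross-architecture transfer is a feature-alignment loss, sketched below; this is a generic cosine-alignment example, not MOCHA's actual objective, and it assumes both feature tensors already share an embedding dimension (e.g. via a projection head).

```python
import torch.nn.functional as F

def cross_arch_alignment_loss(student_feats, vlm_feats):
    """Cosine alignment between a lightweight detector's region features and
    embeddings from a frozen VLM; a generic sketch of cross-architecture
    feature distillation. The teacher side is detached, mirroring the idea
    that the VLM stays frozen."""
    student_proj = F.normalize(student_feats, dim=-1)
    teacher_proj = F.normalize(vlm_feats.detach(), dim=-1)
    return (1.0 - (student_proj * teacher_proj).sum(dim=-1)).mean()
```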
Do Vision-Language Models Understand Visual Persuasiveness?
Neutral · Artificial Intelligence
Recent research has examined whether Vision-Language Models (VLMs) comprehend visual persuasion, which influences human attitudes and decisions. A new dataset was created for binary persuasiveness judgment, introducing a taxonomy of Visual Persuasive Factors (VPFs) that includes various levels of visual cues. The analysis indicates that VLMs tend to overestimate high persuasiveness and struggle with low/mid-level features, while high-level semantic alignment is a strong predictor of human judgment.
Vision Language Models are Confused Tourists
Negative · Artificial Intelligence
Recent evaluations of Vision-Language Models (VLMs) have revealed significant vulnerabilities, particularly in handling diverse cultural inputs. The ConfusedTourist framework was introduced to assess these models' robustness to geographical perturbations, revealing a concerning drop in accuracy when models are faced with complex cultural cues.
Lost in Translation and Noise: A Deep Dive into the Failure Modes of VLMs on Real-World Tables
Neutral · Artificial Intelligence
The introduction of MirageTVQA, a new benchmark for evaluating Vision-Language Models (VLMs), addresses the limitations of existing datasets, which primarily focus on monolingual and visually perfect tables, and exposes significant performance gaps in current models. The benchmark includes nearly 60,000 QA pairs across 24 languages and incorporates realistic noise to better reflect real-world scenarios.
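
Because the benchmark spans 24 languages and injects realistic noise, results are most informative when sliced along both axes. The snippet below sketches that breakdown; the record keys ("lang", "noisy", "pred", "gold") and the exact-match scoring rule are assumptions for illustration, not the benchmark's official evaluation code.

```python
from collections import defaultdict

def score_by_language_and_noise(records):
    """Exact-match accuracy broken down by (language, noise condition).
    `records` is a hypothetical iterable of dicts with 'lang', 'noisy',
    'pred', and 'gold' keys."""
    buckets = defaultdict(lambda: [0, 0])   # (hits, total) per bucket
    for r in records:
        key = (r["lang"], "noisy" if r["noisy"] else "clean")
        buckets[key][0] += int(r["pred"].strip().lower() == r["gold"].strip().lower())
        buckets[key][1] += 1
    return {k: hits / total for k, (hits, total) in buckets.items()}
```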