MultiPriv: Benchmarking Individual-Level Privacy Reasoning in Vision-Language Models

arXiv — cs.CV · Monday, November 24, 2025 at 5:00:00 AM
  • The introduction of MultiPriv marks a significant advancement in the evaluation of individual-level privacy reasoning within Vision-Language Models (VLMs). This benchmark addresses the inadequacies of current privacy assessments, which primarily focus on privacy perception rather than the ability of VLMs to link distributed information and construct individual profiles. The framework includes a novel bilingual multimodal dataset that features synthetic individual profiles linked to sensitive attributes.
  • This development is crucial as it highlights the escalating privacy risks associated with VLMs, which have evolved beyond simple attribute recognition to more complex reasoning capabilities. By establishing a systematic approach to evaluate privacy reasoning, MultiPriv aims to enhance the accountability and safety of VLMs in applications where personal data is involved, thereby fostering trust in AI technologies.
  • The emergence of MultiPriv reflects a growing recognition of the need for robust privacy frameworks in AI, particularly as VLMs become increasingly integrated into various sectors, including autonomous driving and video intelligence. This shift towards prioritizing privacy reasoning aligns with broader discussions on ethical AI practices and the importance of safeguarding individual data in an era where AI systems are capable of sophisticated data processing and inference.
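To make the linkage task described above concrete: the idea is that a model sees one synthetic individual's sensitive attributes scattered across separate items and is scored on whether it re-links them into a single profile. The sketch below is a minimal illustration of that scoring idea only; the field names, item types, and metric are hypothetical and are not taken from the MultiPriv paper's actual schema.

```python
# Hypothetical sketch of an individual-level linkage probe.
# Field names and structure are illustrative assumptions, not the
# benchmark's real format.

# Ground-truth synthetic profile: sensitive attributes tied to one identity.
profile = {
    "person_id": "synth_0042",
    "attributes": {
        "name": "A. Nguyen",
        "employer": "Acme Corp",
        "clinic": "City Health",
    },
}

# The same attributes distributed across separate "observations"
# (in a multimodal benchmark these would be image/text items).
observations = [
    {"item": "badge_photo", "reveals": {"name": "A. Nguyen"}},
    {"item": "office_scene", "reveals": {"employer": "Acme Corp"}},
    {"item": "appointment_slip", "reveals": {"clinic": "City Health"}},
]

def linkage_score(predicted: dict, truth: dict) -> float:
    """Fraction of ground-truth attributes the model correctly re-linked."""
    hits = sum(1 for key, value in truth.items() if predicted.get(key) == value)
    return hits / len(truth)

# A model that links all three observations to one person scores 1.0;
# one that only recovers isolated attributes scores lower.
prediction = {"name": "A. Nguyen", "employer": "Acme Corp", "clinic": "City Health"}
print(linkage_score(prediction, profile["attributes"]))  # 1.0
```

The point of a metric like this is that it rewards cross-item aggregation, not per-item attribute recognition, which is the distinction the benchmark draws between privacy perception and privacy reasoning.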
— via World Pulse Now AI Editorial System


Continue Reading
Cross-Cultural Expert-Level Art Critique Evaluation with Vision-Language Models
Neutral · Artificial Intelligence
A new evaluation framework for assessing the cultural interpretation capabilities of Vision-Language Models (VLMs) has been introduced, focusing on cross-cultural art critique. This tri-tier framework includes automated metrics, rubric-based scoring, and calibration against human ratings, revealing a 5.2% reduction in mean absolute error in cultural understanding assessments.
A Highly Efficient Diversity-based Input Selection for DNN Improvement Using VLMs
Positive · Artificial Intelligence
A recent study has introduced Concept-Based Diversity (CBD), a highly efficient metric for image inputs that utilizes Vision-Language Models (VLMs) to enhance the performance of Deep Neural Networks (DNNs) through improved input selection. This approach addresses the computational intensity and scalability issues associated with traditional diversity-based selection methods.
VMMU: A Vietnamese Multitask Multimodal Understanding and Reasoning Benchmark
Neutral · Artificial Intelligence
The introduction of VMMU, a Vietnamese Multitask Multimodal Understanding and Reasoning Benchmark, aims to assess the capabilities of vision-language models (VLMs) in interpreting and reasoning over visual and textual information in Vietnamese. This benchmark includes 2.5k multimodal questions across seven diverse tasks, emphasizing genuine multimodal integration rather than text-only cues.
Subspace Alignment for Vision-Language Model Test-time Adaptation
Positive · Artificial Intelligence
A new approach called SubTTA has been proposed to enhance test-time adaptation (TTA) for Vision-Language Models (VLMs), addressing vulnerabilities to distribution shifts that can misguide adaptation through unreliable zero-shot predictions. SubTTA aligns the semantic subspaces of visual and textual modalities to improve the accuracy of predictions during adaptation.
Route, Retrieve, Reflect, Repair: Self-Improving Agentic Framework for Visual Detection and Linguistic Reasoning in Medical Imaging
Positive · Artificial Intelligence
A new framework named R^4 has been proposed to enhance medical image analysis by integrating Vision-Language Models (VLMs) into a multi-agent system that includes a Router, Retriever, Reflector, and Repairer, specifically focusing on chest X-ray analysis. This approach aims to improve reasoning, safety, and spatial grounding in medical imaging workflows.
Semantic Misalignment in Vision-Language Models under Perceptual Degradation
Neutral · Artificial Intelligence
Recent research has highlighted significant semantic misalignment in Vision-Language Models (VLMs) when subjected to perceptual degradation, particularly through controlled visual perception challenges using the Cityscapes dataset. This study reveals that while traditional segmentation metrics show only moderate declines, VLMs exhibit severe failures in downstream tasks, including hallucinations and inconsistent safety judgments.
CoMa: Contextual Massing Generation with Vision-Language Models
Positive · Artificial Intelligence
The CoMa project has introduced an innovative automated framework for generating building massing, addressing the complexities of architectural design by utilizing functional requirements and site context. This framework is supported by the newly developed CoMa-20K dataset, which includes detailed geometries and contextual data.
VULCA-Bench: A Multicultural Vision-Language Benchmark for Evaluating Cultural Understanding
Neutral · Artificial Intelligence
VULCA-Bench has been introduced as a multicultural benchmark aimed at evaluating the cultural understanding of Vision-Language Models (VLMs) through a comprehensive framework that spans various cultural traditions. This benchmark includes 7,410 matched image-critique pairs and emphasizes higher-order cultural interpretation rather than just basic visual perception.
