Do MLLMs Exhibit Human-like Perceptual Behaviors? HVSBench: A Benchmark for MLLM Alignment with Human Perceptual Behavior

arXiv — cs.CV · Thursday, December 18, 2025 at 5:00:00 AM
  • A new benchmark called HVSBench has been introduced to evaluate how well Multimodal Large Language Models (MLLMs) align with human perceptual behavior. Comprising over 85,000 samples across a range of perceptual categories, it reveals a significant performance gap between current MLLMs and human participants and highlights the models' limitations in mimicking human visual processing.
  • This matters because it underscores the need for MLLMs to align more closely with the human perceptual system, which is essential for building more reliable and explainable AI. The findings indicate that, despite their capabilities, MLLMs still fall short of human-like visual interpretation.
  • The introduction of HVSBench reflects a growing recognition of the importance of human-like perception in AI, echoed by other recent frameworks aimed at improving visual understanding and reducing hallucinations in MLLMs. This trend highlights ongoing challenges in AI development, including the need for better visual reasoning, bias mitigation, and improved interpretability.
— via World Pulse Now AI Editorial System

Continue Reading
Where Does Vision Meet Language? Understanding and Refining Visual Fusion in MLLMs via Contrastive Attention
Positive · Artificial Intelligence
A recent study has explored the integration of visual and textual information in Multimodal Large Language Models (MLLMs), revealing that visual-text fusion occurs at specific layers within these models rather than uniformly across the network. The research highlights a late-stage fusion pattern.
Ground What You See: Hallucination-Resistant MLLMs via Caption Feedback, Diversity-Aware Sampling, and Conflict Regularization
Positive · Artificial Intelligence
A recent study has introduced a framework aimed at mitigating hallucination issues in Multimodal Large Language Models (MLLMs) during Reinforcement Learning (RL) optimization. The research identifies key factors contributing to hallucinations, including over-reliance on visual reasoning and insufficient exploration diversity. The proposed framework incorporates modules for caption feedback, diversity-aware sampling, and conflict regularization to enhance model reliability.
KidVis: Do Multimodal Large Language Models Possess the Visual Perceptual Capabilities of a 6-Year-Old?
Neutral · Artificial Intelligence
A new benchmark called KidVis has been introduced to evaluate the visual perceptual capabilities of Multimodal Large Language Models (MLLMs), assessing their performance against that of 6- to 7-year-old children across six atomic visual capabilities. The results reveal a significant performance gap: human children score an average of 95.32, compared with GPT-5's score of 67.33.
Application of Ideal Observer for Thresholded Data in Search Task
Positive · Artificial Intelligence
A recent study has introduced an anthropomorphic thresholded visual-search model observer, enhancing task-based image quality assessment by mimicking the human visual system. This model selectively processes high-salience features, improving discrimination performance and diagnostic accuracy while filtering out irrelevant variability.
PRISM: Self-Pruning Intrinsic Selection Method for Training-Free Multimodal Data Selection
Positive · Artificial Intelligence
A new method called PRISM has been introduced to optimize the selection of training data for Multimodal Large Language Models (MLLMs), addressing the redundancy in rapidly growing datasets that increases computational costs. This self-pruning intrinsic selection method aims to enhance efficiency without the need for extensive training or proxy-based inference techniques.
MoHoBench: Assessing Honesty of Multimodal Large Language Models via Unanswerable Visual Questions
Neutral · Artificial Intelligence
A recent study introduced MoHoBench, a benchmark designed to assess the honesty of Multimodal Large Language Models (MLLMs) when confronted with unanswerable visual questions. This research highlights the need for a systematic evaluation of MLLMs' response behaviors, as their trustworthiness in generating content remains underexplored.
