Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens

arXiv — cs.LG · Tuesday, December 2, 2025 at 5:00:00 AM
  • A new framework called Chain-of-Visual-Thought (COVT) has been introduced to enhance Vision-Language Models (VLMs) by letting them reason over continuous visual tokens that carry dense visual information. The approach distills knowledge from lightweight vision experts into a limited token budget, with the aim of improving perceptual understanding, particularly spatial reasoning and geometric awareness (see the toy sketch after this summary).
  • The development of COVT is significant as it addresses the current limitations of VLMs, which excel in linguistic reasoning but struggle with complex visual tasks. By incorporating continuous visual tokens, COVT enhances the models' ability to process and understand visual data, potentially leading to more accurate and nuanced outputs in applications requiring visual comprehension.
  • This advancement reflects a broader trend in AI research focusing on improving the integration of visual and linguistic processing. As various frameworks like AVA-VLA and Evo-0 also seek to enhance visual understanding in dynamic contexts, the ongoing exploration of visual reasoning capabilities in VLMs highlights the importance of developing models that can effectively bridge the gap between visual perception and language.
— via World Pulse Now AI Editorial System
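
To make the mechanism concrete, here is a minimal, hypothetical sketch of how a small budget of continuous visual tokens distilled from a lightweight vision expert could be fed to a VLM alongside its text embeddings. This is not the authors' implementation: the module name VisualThoughtAdapter, the dimensions, and the 16-token budget are all assumptions for illustration.

```python
# Minimal sketch of the continuous-visual-token idea, NOT the COVT authors' code.
# All module names, dimensions, and the token budget are illustrative assumptions.
import torch
import torch.nn as nn

class VisualThoughtAdapter(nn.Module):
    """Compresses dense features from a lightweight vision expert (e.g. a depth
    or segmentation head) into a small budget of continuous tokens that a VLM
    can attend to alongside its ordinary text tokens."""
    def __init__(self, expert_dim: int, lm_dim: int, token_budget: int = 16):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(token_budget, lm_dim))
        self.proj = nn.Linear(expert_dim, lm_dim)
        self.attn = nn.MultiheadAttention(lm_dim, num_heads=8, batch_first=True)

    def forward(self, expert_features: torch.Tensor) -> torch.Tensor:
        # expert_features: (batch, num_patches, expert_dim) dense expert output
        kv = self.proj(expert_features)                        # (B, P, lm_dim)
        q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)
        visual_thoughts, _ = self.attn(q, kv, kv)              # (B, budget, lm_dim)
        return visual_thoughts

# Usage: prepend the continuous tokens to the text embeddings before the
# language model, so the VLM can "think" in visual space within a fixed budget.
adapter = VisualThoughtAdapter(expert_dim=256, lm_dim=4096, token_budget=16)
expert_feats = torch.randn(2, 196, 256)   # stand-in for a vision expert's features
text_embeds = torch.randn(2, 32, 4096)    # stand-in for tokenized prompt embeddings
lm_input = torch.cat([adapter(expert_feats), text_embeds], dim=1)
```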


Continue Reading
Cross-Cultural Expert-Level Art Critique Evaluation with Vision-Language Models
Neutral · Artificial Intelligence
A new evaluation framework for assessing the cultural interpretation capabilities of Vision-Language Models (VLMs) has been introduced, focusing on cross-cultural art critique. This tri-tier framework includes automated metrics, rubric-based scoring, and calibration against human ratings, revealing a 5.2% reduction in mean absolute error in cultural understanding assessments.
A Highly Efficient Diversity-based Input Selection for DNN Improvement Using VLMs
Positive · Artificial Intelligence
A recent study has introduced Concept-Based Diversity (CBD), a highly efficient metric for image inputs that utilizes Vision-Language Models (VLMs) to enhance the performance of Deep Neural Networks (DNNs) through improved input selection. This approach addresses the computational intensity and scalability issues associated with traditional diversity-based selection methods.
ClimateIQA: A New Dataset and Benchmark to Advance Vision-Language Models in Meteorology Anomalies Analysis
Positive · Artificial Intelligence
A new dataset named ClimateIQA has been introduced to enhance the capabilities of Vision-Language Models (VLMs) in analyzing meteorological anomalies. This dataset, which includes 26,280 high-quality images, aims to address the challenges faced by existing models like GPT-4o and Qwen-VL in interpreting complex meteorological heatmaps characterized by irregular shapes and color variations.
Semantic Misalignment in Vision-Language Models under Perceptual Degradation
Neutral · Artificial Intelligence
Recent research has highlighted significant semantic misalignment in Vision-Language Models (VLMs) when subjected to perceptual degradation, particularly through controlled visual perception challenges using the Cityscapes dataset. This study reveals that while traditional segmentation metrics show only moderate declines, VLMs exhibit severe failures in downstream tasks, including hallucinations and inconsistent safety judgments.
CoMa: Contextual Massing Generation with Vision-Language Models
Positive · Artificial Intelligence
The CoMa project has introduced an innovative automated framework for generating building massing, addressing the complexities of architectural design by utilizing functional requirements and site context. This framework is supported by the newly developed CoMa-20K dataset, which includes detailed geometries and contextual data.
VideoHEDGE: Entropy-Based Hallucination Detection for Video-VLMs via Semantic Clustering and Spatiotemporal Perturbations
Neutral · Artificial Intelligence
A new framework named VideoHEDGE has been introduced to detect hallucinations in video-capable vision-language models (Video-VLMs), addressing the frequent inaccuracies in video question answering. This system employs entropy-based reliability estimation and semantic clustering to evaluate the correctness of generated answers against video-question pairs.
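As a rough illustration of the entropy-over-semantic-clusters idea (not VideoHEDGE's actual pipeline), the sketch below groups several sampled answers with an assumed equivalence predicate and scores reliability by the entropy of the resulting cluster distribution; exact string match stands in for a real semantic-equivalence model.

```python
# Hedged sketch of entropy-based reliability scoring over semantically clustered
# answers; the clustering rule and the toy data below are illustrative assumptions.
import math

def semantic_entropy(sampled_answers, are_equivalent):
    """Cluster sampled answers by a semantic-equivalence predicate, then return
    the entropy of the cluster distribution. Higher entropy means the model's
    answers disagree, which is treated as a hallucination signal."""
    clusters = []  # each cluster is a list of answers judged equivalent
    for ans in sampled_answers:
        for cluster in clusters:
            if are_equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(sampled_answers)
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Toy usage with exact match standing in for a semantic-equivalence model.
answers = ["a red car", "a red car", "a blue truck", "a red car", "a bicycle"]
score = semantic_entropy(answers, lambda a, b: a == b)
print(f"semantic entropy: {score:.3f}")  # higher score -> less reliable answer
```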
VULCA-Bench: A Multicultural Vision-Language Benchmark for Evaluating Cultural Understanding
Neutral · Artificial Intelligence
VULCA-Bench has been introduced as a multicultural benchmark aimed at evaluating the cultural understanding of Vision-Language Models (VLMs) through a comprehensive framework that spans various cultural traditions. This benchmark includes 7,410 matched image-critique pairs and emphasizes higher-order cultural interpretation rather than just basic visual perception.
Latent Reconstruction from Generated Data for Multimodal Misinformation Detection
Positive · Artificial Intelligence
A new framework named 'MisCaption This!' has been introduced to generate high-fidelity synthetic datasets for multimodal misinformation detection, addressing the challenges posed by miscaptioned images that misrepresent their context or meaning. This framework utilizes Adversarial Prompting of Vision-Language Models (VLMs) and is complemented by a Transformer-based network called LAMAR, which reconstructs truthful caption embeddings to enhance detection accuracy.
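The sketch below illustrates the general latent-reconstruction idea in a simplified, hypothetical form: predict what a truthful caption embedding would look like from the image embedding, then classify the pair using the observed caption embedding together with the reconstruction gap. The class name, dimensions, and two-layer Transformer are assumptions, not the LAMAR architecture.

```python
# Hedged sketch of misinformation detection via reconstructed caption embeddings;
# an illustration of the general idea, not the LAMAR network itself.
import torch
import torch.nn as nn

class LatentReconstructionDetector(nn.Module):
    """Reconstructs a plausible truthful caption embedding from the image
    embedding, then classifies the image-caption pair using the observed
    caption embedding and the reconstruction gap. Dimensions are assumptions."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.reconstructor = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Features: [observed caption, reconstruction, gap] -> {truthful, miscaptioned}
        self.classifier = nn.Linear(3 * dim, 2)

    def forward(self, image_emb: torch.Tensor, caption_emb: torch.Tensor) -> torch.Tensor:
        # image_emb, caption_emb: (batch, dim) embeddings from a frozen VLM encoder
        recon = self.reconstructor(image_emb.unsqueeze(1)).squeeze(1)
        gap = caption_emb - recon
        return self.classifier(torch.cat([caption_emb, recon, gap], dim=-1))

# Toy usage with random stand-ins for VLM image and caption embeddings.
detector = LatentReconstructionDetector(dim=512)
logits = detector(torch.randn(4, 512), torch.randn(4, 512))
```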
