SPEED-Q: Staged Processing with Enhanced Distillation towards Efficient Low-bit On-device VLM Quantization

arXiv — cs.CV · Thursday, November 13, 2025 at 5:00:00 AM
SPEED-Q addresses the deployment of Vision-Language Models (VLMs) on edge devices, which is essential for low-latency and privacy-preserving applications. The framework tackles two major challenges: the difference in quantization sensitivity between the vision and language components of VLMs, and the training instability caused by low-bit quantization. By introducing a staged sensitivity-adaptive mechanism, SPEED-Q harmonizes performance across the two modalities, allowing VLMs to be effectively quantized for resource-constrained devices. The approach improves memory efficiency, reduces bandwidth requirements, and stabilizes training, and it is presented as the first framework specifically designed for quantizing small-scale billion-parameter VLMs. This work paves the way for more sophisticated AI applications on everyday devices, enhancing user experience while maintaining privacy and efficiency.
— via World Pulse Now AI Editorial System
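The summary includes no code, but the core idea, estimating per-modality quantization sensitivity and quantizing in stages, can be sketched. In the illustrative Python sketch below, `fake_quantize`, `sensitivity`, and `staged_quantize` are hypothetical names, and the sensitivity proxy (relative weight-quantization error) and the stage ordering are assumptions, not SPEED-Q's actual method:

```python
# Illustrative sketch only: staged, sensitivity-adaptive weight quantization.
import torch
import torch.nn as nn

def fake_quantize(w: torch.Tensor, n_bits: int) -> torch.Tensor:
    """Symmetric uniform quantize-dequantize of a weight tensor."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale

@torch.no_grad()
def sensitivity(module: nn.Module, n_bits: int) -> float:
    """Assumed proxy for quantization sensitivity: relative weight error."""
    err = norm = 0.0
    for p in module.parameters():
        err += (p - fake_quantize(p, n_bits)).pow(2).sum().item()
        norm += p.pow(2).sum().item()
    return err / max(norm, 1e-12)

@torch.no_grad()
def staged_quantize(vision: nn.Module, language: nn.Module, n_bits: int = 4):
    """Quantize the less sensitive tower first, then the more fragile one,
    so each stage can be monitored (or fine-tuned) before the next."""
    for name, tower in sorted([("vision", vision), ("language", language)],
                              key=lambda kv: sensitivity(kv[1], n_bits)):
        for p in tower.parameters():
            p.copy_(fake_quantize(p, n_bits))
```

Staging matters because quantizing both towers at once couples their errors; handling the modalities separately is the aspect the summary emphasizes.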


Recommended Readings
Artificial neuron can mimic different parts of the brain—a major step toward human-like robotics
Positive · Artificial Intelligence
Scientists have developed an artificial neuron that can mimic various parts of the brain, marking a significant advancement toward creating robots that can sense and respond to their environment like humans. This innovation could pave the way for more sophisticated human-like robotics, enhancing the interaction between machines and their surroundings.
ExPairT-LLM: Exact Learning for LLM Code Selection by Pairwise Queries
Positive · Artificial Intelligence
ExPairT-LLM is introduced as an exact learning algorithm for selecting the correct program from multiple outputs generated by large language models (LLMs). Traditional code selection algorithms often fail to identify the correct program because they misidentify nonequivalent programs or rely on LLM judgments that are not always accurate. ExPairT-LLM addresses these issues with pairwise membership and pairwise equivalence queries, improving the accuracy of program selection; evaluations show a significant improvement in success rates over existing algorithms.
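No code accompanies the summary; the toy sketch below only illustrates the flavor of selection driven by pairwise queries. `equiv`, `member`, and `select_program` are hypothetical stand-ins, with both oracles approximated on a finite probe set, and this is not the ExPairT-LLM algorithm itself:

```python
from typing import Callable, Dict, List

Program = Callable[[int], int]

def equiv(p: Program, q: Program, probes: List[int]) -> bool:
    """Pairwise equivalence query, approximated on a finite probe set."""
    return all(p(x) == q(x) for x in probes)

def member(x: int, y: int, oracle: Dict[int, int]) -> bool:
    """Pairwise membership query: is y the correct output for input x?"""
    return oracle.get(x) == y

def select_program(cands: List[Program], probes: List[int],
                   oracle: Dict[int, int]) -> Program:
    """Tournament: replace the champion only when a non-equivalent
    challenger satisfies the membership oracle on more probes."""
    def score(p: Program) -> int:
        return sum(member(x, p(x), oracle) for x in probes)
    best = cands[0]
    for cand in cands[1:]:
        if not equiv(best, cand, probes) and score(cand) > score(best):
            best = cand
    return best

# Example: picks the doubling candidate, which matches the oracle.
picked = select_program([lambda x: x ** 2, lambda x: 2 * x, lambda x: x + x],
                        probes=[1, 2, 3], oracle={1: 2, 2: 4, 3: 6})
```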
Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness
Neutral · Artificial Intelligence
The paper titled 'Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness' discusses the capabilities of large language models (LLMs) in biomedical natural language processing (NLP) tasks. It highlights the sensitivity of LLMs to demonstration selection and addresses the hallucination issue through retrieval-augmented LLMs (RAL). However, there is a lack of rigorous evaluation of RAL's impact on various biomedical NLP tasks, which complicates understanding its capabilities in this domain.
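As a loose, assumed illustration of the retrieval-augmented setup being benchmarked, the sketch below retrieves the demonstrations most similar to a query and assembles a prompt. The bag-of-words cosine retriever and the `retrieve`/`build_prompt` names are stand-ins, not the paper's pipeline:

```python
import math
from collections import Counter
from typing import List

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: List[str], k: int = 3) -> List[str]:
    """Return the k corpus documents most similar to the query."""
    q = Counter(query.lower().split())
    return sorted(corpus, reverse=True,
                  key=lambda d: cosine(q, Counter(d.lower().split())))[:k]

def build_prompt(query: str, corpus: List[str]) -> str:
    """Prepend retrieved demonstrations to ground the LLM's answer."""
    demos = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return f"Context:\n{demos}\n\nQuestion: {query}\nAnswer:"
```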
Bridging Hidden States in Vision-Language Models
Positive · Artificial Intelligence
Vision-Language Models (VLMs) are emerging models that integrate visual content with natural language. Current methods typically fuse data either early in the encoding process or late through pooled embeddings. This paper introduces a lightweight fusion module utilizing cross-only, bidirectional attention layers to align hidden states from both modalities, enhancing understanding while keeping encoders non-causal. The proposed method aims to improve the performance of VLMs by leveraging the inherent structure of visual and textual data.
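A minimal PyTorch sketch of what a cross-only, bidirectional fusion layer could look like follows; `CrossOnlyFusion` is a hypothetical name, and the residual-plus-LayerNorm wiring is an assumption rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class CrossOnlyFusion(nn.Module):
    """Each modality attends only to the other (no self-attention),
    leaving both encoders untouched and non-causal."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.vis_from_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_from_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor):
        # vis: (B, Nv, D) vision hidden states; txt: (B, Nt, D) text states.
        v_upd, _ = self.vis_from_txt(query=vis, key=txt, value=txt)
        t_upd, _ = self.txt_from_vis(query=txt, key=vis, value=vis)
        return self.norm_v(vis + v_upd), self.norm_t(txt + t_upd)
```

Because such a module consumes hidden states rather than pooled embeddings, it can align token-level structure from both encoders, which is the property the summary highlights.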
FastDriveVLA: Efficient End-to-End Driving via Plug-and-Play Reconstruction-based Token Pruning
Positive · Artificial Intelligence
FastDriveVLA is a novel framework designed for efficient end-to-end autonomous driving through a reconstruction-based visual token pruning method. This approach addresses the high computational costs associated with long visual tokens in Vision-Language-Action (VLA) models. By focusing on retaining visual tokens that contain essential foreground information, FastDriveVLA aims to enhance decision-making in driving scenarios, marking a significant advancement in the application of VLA models in autonomous systems.
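The sketch below shows plain top-k visual token pruning in PyTorch; the learned linear scorer stands in for the paper's reconstruction-based foreground scoring, and `TokenPruner` and `keep_ratio` are illustrative names:

```python
import torch
import torch.nn as nn

class TokenPruner(nn.Module):
    """Score visual tokens and keep only the top fraction, shrinking the
    sequence the downstream language/action model has to process."""
    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)  # stand-in importance head
        self.keep_ratio = keep_ratio

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) visual tokens from the vision encoder.
        B, N, D = tokens.shape
        k = max(1, int(N * self.keep_ratio))
        scores = self.scorer(tokens).squeeze(-1)   # (B, N) importance
        idx = scores.topk(k, dim=1).indices        # (B, k) kept positions
        return tokens.gather(1, idx.unsqueeze(-1).expand(B, k, D))
```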
Human-Corrected Labels Learning: Enhancing Labels Quality via Human Correction of VLMs Discrepancies
Positive · Artificial Intelligence
The article discusses the introduction of Human-Corrected Labels (HCLs) to improve the quality of labels generated by Vision-Language Models (VLMs). It highlights the issues of low-quality labels and the lack of error correction in VLM outputs. The proposed method involves human intervention to correct discrepancies in VLM-generated labels, leading to enhanced annotation quality and reduced labor costs, supported by extensive experimental results.
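One way to picture the workflow is routing only model disagreements to annotators. In this assumed sketch, `vlm_a`, `vlm_b`, and `ask_human` are hypothetical callables, and using two VLM passes as the discrepancy signal is an illustration, not the paper's protocol:

```python
from typing import Callable, Dict, Hashable, Iterable, Tuple

def harvest_labels(samples: Iterable[Hashable],
                   vlm_a: Callable[[Hashable], str],
                   vlm_b: Callable[[Hashable], str],
                   ask_human: Callable[[Hashable, Tuple[str, str]], str]
                   ) -> Dict[Hashable, str]:
    """Accept labels where the two VLM labelers agree; send only their
    discrepancies to a human corrector, keeping labor costs down."""
    labels = {}
    for s in samples:
        ya, yb = vlm_a(s), vlm_b(s)
        labels[s] = ya if ya == yb else ask_human(s, (ya, yb))
    return labels
```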
Collaborative Representation Learning for Alignment of Tactile, Language, and Vision Modalities
Positive · Artificial Intelligence
The article presents TLV-CoRe, a new method for collaborative representation learning that integrates tactile, language, and vision modalities. It addresses the challenges of existing tactile sensors, which often lack standardization and hinder cross-sensor generalization. TLV-CoRe introduces a Sensor-Aware Modulator to unify tactile features and employs decoupled learning to enhance the integration of these modalities, alongside a new evaluation framework called RSS to assess the effectiveness of tactile models.
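One plausible (assumed) reading of a Sensor-Aware Modulator is FiLM-style conditioning on a sensor identity, sketched below; the class name and the embedding-based scale/shift are illustrative, not the paper's design:

```python
import torch
import torch.nn as nn

class SensorAwareModulator(nn.Module):
    """Per-sensor affine modulation mapping heterogeneous tactile features
    into one shared space, so downstream alignment is sensor-agnostic."""
    def __init__(self, dim: int, num_sensors: int):
        super().__init__()
        self.gamma = nn.Embedding(num_sensors, dim)  # per-sensor scale
        self.beta = nn.Embedding(num_sensors, dim)   # per-sensor shift
        nn.init.ones_(self.gamma.weight)   # start as the identity map
        nn.init.zeros_(self.beta.weight)

    def forward(self, feats: torch.Tensor, sensor_id: torch.Tensor):
        # feats: (B, D) tactile features; sensor_id: (B,) integer indices.
        return self.gamma(sensor_id) * feats + self.beta(sensor_id)
```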