BD-Net: Has Depth-Wise Convolution Ever Been Applied in Binary Neural Networks?

arXiv — cs.CV · Tuesday, November 25, 2025 at 5:00:00 AM
  • A recent study introduces BD-Net, which successfully applies depth-wise convolution in Binary Neural Networks (BNNs) by proposing a 1.58-bit convolution and a pre-BN residual connection to enhance expressiveness and stabilize training. This marks a significant advance in model compression, achieving new state-of-the-art results on ImageNet with a MobileNet V1 backbone and outperforming previous methods across various datasets.
  • The development of BD-Net is crucial as it addresses the limitations of extreme quantization in BNNs, which often destabilizes training and reduces representational capacity. By enhancing the optimization process, this approach not only improves the efficiency of lightweight architectures but also opens new avenues for deploying BNNs in resource-constrained environments.
  • This advancement in BNNs aligns with ongoing efforts in the field of artificial intelligence to create more efficient neural network architectures. Techniques such as structured pruning and adaptive fine-tuning are gaining traction, highlighting a broader trend towards optimizing model performance while maintaining low computational costs. The integration of various pruning strategies and quantization methods reflects a growing recognition of the need for efficient AI solutions in diverse applications.
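The summary above mentions a 1.58-bit convolution but does not spell out the quantization scheme; the figure suggests ternary weights, since each value in {-1, 0, +1} carries log2(3) ≈ 1.58 bits. A minimal NumPy sketch of ternary weight quantization under that assumption, where `delta_scale` is a hypothetical threshold hyperparameter rather than a value from the paper:

```python
import numpy as np

def ternarize(w, delta_scale=0.75):
    """Quantize weights to {-1, 0, +1} -- log2(3) ~ 1.58 bits per weight.

    `delta_scale` is an assumed threshold factor (not from the paper):
    weights with |w| below delta are zeroed, the rest keep their sign,
    and a per-tensor scale alpha restores the magnitude.
    """
    delta = delta_scale * np.mean(np.abs(w))          # per-tensor threshold
    q = np.where(np.abs(w) > delta, np.sign(w), 0.0)  # ternary codes
    alpha = np.abs(w[q != 0]).mean() if np.any(q != 0) else 0.0
    return alpha * q

w = np.array([0.9, -0.05, -1.2, 0.3, 0.02])
print(ternarize(w))  # small weights zeroed, large ones snapped to +/-alpha
```

In a real BNN training loop this would sit inside the forward pass with a straight-through estimator for gradients; the sketch shows only the quantizer itself.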
— via World Pulse Now AI Editorial System


Continue Reading
A Highly Efficient Diversity-based Input Selection for DNN Improvement Using VLMs
Positive · Artificial Intelligence
A recent study has introduced Concept-Based Diversity (CBD), a highly efficient metric for image inputs that utilizes Vision-Language Models (VLMs) to enhance the performance of Deep Neural Networks (DNNs) through improved input selection. This approach addresses the computational intensity and scalability issues associated with traditional diversity-based selection methods.
NOVAK: Unified adaptive optimizer for deep neural networks
Positive · Artificial Intelligence
The recent introduction of NOVAK, a unified adaptive optimizer for deep neural networks, combines several advanced techniques including adaptive moment estimation and lookahead synchronization, aiming to enhance the performance and efficiency of neural network training.
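The summary names lookahead synchronization as one of NOVAK's ingredients. The optimizer's actual inner update is adaptive and not detailed here; as a rough illustration of the lookahead mechanism alone, the sketch below wraps plain SGD (an assumed stand-in) with fast/slow weight interpolation on a toy quadratic:

```python
import numpy as np

def lookahead(grad_fn, w0, inner_lr=0.1, k=5, alpha=0.5, outer_steps=20):
    """Lookahead wrapper: run k fast inner steps, then pull the slow
    weights a fraction alpha toward the fast ones. NOVAK's real inner
    optimizer is adaptive; plain SGD is used here only to keep the
    sketch short."""
    slow = np.asarray(w0, dtype=float)
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):                 # k fast inner updates
            fast -= inner_lr * grad_fn(fast)
        slow += alpha * (fast - slow)      # slow-weight interpolation
    return slow

# toy quadratic with minimum at w = [3, -2]
grad = lambda w: 2.0 * (w - np.array([3.0, -2.0]))
print(lookahead(grad, np.zeros(2)))  # converges near [3, -2]
```

The slow weights smooth out the fast trajectory, which is the stabilizing effect lookahead-style synchronization is meant to provide.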
When Models Know When They Do Not Know: Calibration, Cascading, and Cleaning
Positive · Artificial Intelligence
A recent study titled 'When Models Know When They Do Not Know: Calibration, Cascading, and Cleaning' proposes a universal training-free method for model calibration, cascading, and data cleaning, enhancing models' ability to recognize their limitations. The research highlights that higher confidence correlates with higher accuracy and that models calibrated on validation sets maintain their calibration on test sets.
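The cascading idea described above can be illustrated with a small routing rule: accept a cheap model's answer when its confidence clears a threshold, otherwise defer to a stronger model. The threshold `tau` and the two probability arrays below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cascade_predict(cheap_probs, strong_probs, tau=0.9):
    """Route each example: keep the cheap model's prediction when its top
    softmax probability is at least tau, otherwise fall back to the
    strong model. tau is an assumed threshold for illustration."""
    conf = cheap_probs.max(axis=1)          # cheap model's confidence
    use_cheap = conf >= tau
    preds = np.where(use_cheap,
                     cheap_probs.argmax(axis=1),
                     strong_probs.argmax(axis=1))
    return preds, use_cheap.mean()          # predictions, cheap coverage

cheap = np.array([[0.95, 0.05], [0.55, 0.45]])
strong = np.array([[0.60, 0.40], [0.10, 0.90]])
preds, coverage = cascade_predict(cheap, strong)
print(preds, coverage)  # [0 1] 0.5 -- second example was deferred
```

The study's finding that confidence tracks accuracy is exactly what makes such a rule work: deferrals concentrate on the examples the cheap model is most likely to get wrong.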
Hierarchical Online-Scheduling for Energy-Efficient Split Inference with Progressive Transmission
Positive · Artificial Intelligence
A novel framework named ENACHI has been proposed for hierarchical online scheduling in energy-efficient split inference with Deep Neural Networks (DNNs), addressing the inefficiencies in current scheduling methods that fail to optimize both task-level decisions and packet-level dynamics. This framework integrates a two-tier Lyapunov-based approach and progressive transmission techniques to enhance adaptivity and resource utilization.
The Role of Noisy Data in Improving CNN Robustness for Image Classification
Positive · Artificial Intelligence
A recent study highlights the importance of data quality in enhancing the robustness of convolutional neural networks (CNNs) for image classification, specifically through the introduction of controlled noise during training. Utilizing the CIFAR-10 dataset, the research demonstrates that incorporating just 10% noisy data can significantly reduce test loss and improve accuracy under corrupted conditions without adversely affecting performance on clean data.
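The 10% figure in the summary suggests corrupting a small random fraction of each training batch. The paper's exact corruption model is not given here; a minimal sketch under the assumption of additive Gaussian pixel noise, with `sigma` as an illustrative choice:

```python
import numpy as np

def inject_noise(images, frac=0.10, sigma=0.1, seed=0):
    """Corrupt a random `frac` of the batch with additive Gaussian noise.

    frac mirrors the 10% figure in the summary; sigma and the Gaussian
    model are assumptions for illustration, not the paper's exact setup.
    Images are float arrays in [0, 1].
    """
    rng = np.random.default_rng(seed)
    out = images.copy()
    n_noisy = int(round(frac * len(images)))
    idx = rng.choice(len(images), size=n_noisy, replace=False)
    out[idx] += rng.normal(0.0, sigma, size=out[idx].shape)
    return np.clip(out, 0.0, 1.0), idx

batch = np.random.default_rng(1).random((50, 32, 32, 3))
noisy, idx = inject_noise(batch)
print(len(idx))  # 5 of 50 images corrupted
```

Mixing a small corrupted fraction like this acts as a regularizer: the network sees perturbed inputs during training, which is the mechanism the study credits for improved robustness under corruption.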
IGAN: A New Inception-based Model for Stable and High-Fidelity Image Synthesis Using Generative Adversarial Networks
Positive · Artificial Intelligence
A new model called Inception Generative Adversarial Network (IGAN) has been introduced, addressing the challenges of high-quality image synthesis and training stability in Generative Adversarial Networks (GANs). The IGAN model utilizes deeper inception-inspired and dilated convolutions, achieving significant improvements in image fidelity with a Fréchet Inception Distance (FID) of 13.12 and 15.08 on the CUB-200 and ImageNet datasets, respectively.
Closed-Loop LLM Discovery of Non-Standard Channel Priors in Vision Models
Positive · Artificial Intelligence
A recent study has introduced a closed-loop framework for Neural Architecture Search (NAS) utilizing Large Language Models (LLMs) to optimize channel configurations in vision models. This approach addresses the combinatorial challenges of layer specifications in deep neural networks by leveraging LLMs to generate and refine architectural designs based on performance data.
A Preliminary Agentic Framework for Matrix Deflation
Positive · Artificial Intelligence
A new framework for matrix deflation has been proposed, utilizing an agentic approach where a Large Language Model (LLM) generates rank-1 Singular Value Decomposition (SVD) updates, while a Vision Language Model (VLM) evaluates these updates, enhancing solver stability through in-context learning and strategic permutations. This method was tested on various matrices, demonstrating promising results in noise reduction and accuracy.
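The rank-1 SVD update the summary describes is a classical operation; the agentic LLM/VLM loop around it is the paper's contribution and is not sketched here. A minimal NumPy illustration of the numerical update itself, subtracting the leading singular triplet:

```python
import numpy as np

def deflate_rank1(A):
    """Classic rank-1 SVD deflation: remove the leading singular triplet.
    This shows only the numerical update; the paper's LLM-generated
    updates and VLM evaluation loop are outside this sketch."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return A - s[0] * np.outer(U[:, 0], Vt[0]), s[0]

A = np.array([[3.0, 0.0], [0.0, 1.0]])
R, s1 = deflate_rank1(A)
print(s1)  # 3.0 -- leading singular value removed
print(R)   # residual is rank-1 with singular value 1.0
```

Repeating this update peels off one singular component at a time, which is why stability of each step matters for the overall deflation.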
