Approximate Multiplier Induced Error Propagation in Deep Neural Networks

arXiv — cs.LG · Tuesday, December 9, 2025 at 5:00:00 AM
  • A new analytical framework has been introduced to characterize the error propagation induced by Approximate Multipliers (AxMs) in Deep Neural Networks (DNNs). The framework connects the statistical error moments of AxMs to the distortion they introduce in General Matrix Multiplication (GEMM), revealing that the multiplier's mean error predominantly governs the resulting accuracy degradation, particularly on ImageNet-scale networks (see the numerical sketch after this list).
  • This development is significant as it provides a mathematical basis for understanding how AxMs can reduce energy consumption in hardware accelerators without severely impacting the accuracy of DNNs. By quantifying the relationship between AxM errors and DNN performance, this research could guide future designs of energy-efficient neural network architectures.
  • The findings highlight ongoing challenges in optimizing DNNs, particularly regarding the balance between computational efficiency and accuracy. As the demand for more efficient AI models grows, innovations like mixed-precision quantization and dynamic parameter optimization are increasingly relevant. These advancements aim to enhance DNN performance while addressing the complexities introduced by various error propagation mechanisms.
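
The digest does not reproduce the paper's derivation, but the headline claim above, that the multiplier's mean (bias) error dominates GEMM distortion, is easy to illustrate numerically. Below is a minimal sketch assuming a toy approximate multiplier modeled as the exact product plus additive error with configurable mean and spread; `mean_err` and `std_err` are illustrative parameters, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def axm_gemm(A, B, mean_err, std_err):
    """GEMM built from a toy approximate multiplier: each scalar
    product picks up an additive error with a fixed mean (bias)."""
    # Elementwise products for every (i, k, j) triple, then reduce over k.
    prods = A[:, :, None] * B[None, :, :]
    noise = rng.normal(mean_err, std_err, size=prods.shape)
    return (prods + noise).sum(axis=1)

A = rng.normal(size=(64, 128))
B = rng.normal(size=(128, 32))
exact = A @ B

# Zero-mean error largely cancels over the reduction dimension;
# a small mean error accumulates K-fold and dominates the distortion.
for mu, sigma in [(0.0, 0.05), (0.01, 0.05)]:
    dist = np.abs(axm_gemm(A, B, mu, sigma) - exact).mean()
    print(f"mean_err={mu:5.2f}  std_err={sigma}  mean |distortion| = {dist:.3f}")
```

The intuition matches the paper's stated finding: a bias of mu accumulates roughly K·mu over a length-K reduction, while zero-mean error grows only as the square root of K.
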
— via World Pulse Now AI Editorial System

Continue Reading
Enabling Validation for Robust Few-Shot Recognition
Positive · Artificial Intelligence
A recent study on Few-Shot Recognition (FSR) highlights the challenges of training Vision-Language Models (VLMs) with minimal labeled data, particularly the lack of validation data. The research proposes using retrieved open data for validation; although its out-of-distribution nature may degrade how faithfully it ranks candidate models, it offers a practical workaround for the data scarcity issue.
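As a toy illustration of selecting among candidate models using retrieved, possibly out-of-distribution, validation data; the protocol and interfaces here are hypothetical stand-ins, not the paper's method:
```python
def select_by_retrieved_validation(candidates, retrieved_val):
    """Pick the candidate scoring best on retrieved validation pairs,
    treating OOD accuracy as a noisy proxy for target accuracy."""
    def acc(model):
        return sum(model(x) == y for x, y in retrieved_val) / len(retrieved_val)
    return max(candidates, key=acc)

# Toy usage: "models" as threshold classifiers over scalar inputs.
candidates = [lambda x: x > 0.5, lambda x: x > 0.2]
retrieved_val = [(0.3, True), (0.7, True), (0.1, False), (0.4, True)]
best = select_by_retrieved_validation(candidates, retrieved_val)
```
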
Fast-ARDiff: An Entropy-informed Acceleration Framework for Continuous Space Autoregressive Generation
Positive · Artificial Intelligence
The Fast-ARDiff framework has been introduced as an innovative solution to enhance the efficiency of continuous space autoregressive generation by optimizing both autoregressive and diffusion components, thereby reducing latency in image synthesis processes. This framework employs an entropy-informed speculative strategy to improve representation alignment and integrates diffusion decoding into a unified end-to-end system.
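The summary does not spell out the speculative strategy; as a rough illustration of entropy-informed gating, the sketch below accepts a cheap autoregressive draft when its predictive entropy is low and invokes an expensive refinement step otherwise. `ar_propose`, `refine`, and the threshold `tau` are all hypothetical stand-ins:
```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector, in nats."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def speculative_step(ar_propose, refine, x, tau=1.0):
    """Accept the cheap autoregressive proposal outright when the model
    is confident (low entropy); fall back to the costly refiner otherwise."""
    draft, probs = ar_propose(x)
    return draft if entropy(probs) < tau else refine(x, draft)
```
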
Repulsor: Accelerating Generative Modeling with a Contrastive Memory Bank
Positive · Artificial Intelligence
A new framework named Repulsor has been introduced to enhance generative modeling by utilizing a contrastive memory bank, which eliminates the need for external encoders and addresses inefficiencies in representation learning. This method allows for a dynamic queue of negative samples, improving the training process of generative models without the overhead of pre-trained encoders.
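Repulsor's exact objective is not given here, but the "dynamic queue of negative samples" idea resembles a MoCo-style FIFO memory bank, sketched below; queue size, dimensions, and temperature are illustrative:
```python
import torch
import torch.nn.functional as F

class NegativeQueue:
    """FIFO memory bank of past embeddings reused as negatives,
    avoiding the need for an external pre-trained encoder."""
    def __init__(self, dim, size=4096):
        self.bank = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, feats):
        """Overwrite the oldest entries with the newest batch."""
        feats = F.normalize(feats, dim=1)
        idx = (self.ptr + torch.arange(feats.shape[0])) % self.bank.shape[0]
        self.bank[idx] = feats
        self.ptr = int(idx[-1] + 1) % self.bank.shape[0]

    def contrastive_logits(self, queries, temperature=0.2):
        """Similarity of queries to every stored negative."""
        return F.normalize(queries, dim=1) @ self.bank.T / temperature
```
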
DAASH: A Meta-Attack Framework for Synthesizing Effective and Stealthy Adversarial Examples
Positive · Artificial Intelligence
The introduction of DAASH, a meta-attack framework, marks a significant advancement in generating effective and perceptually aligned adversarial examples, addressing the limitations of traditional Lp-norm constrained methods. This framework strategically composes existing attack methods in a multi-stage process, enhancing the perceptual alignment of adversarial examples.
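The multi-stage composition can be sketched generically: each stage refines the previous stage's output, and a score balancing attack success against perceptual distance keeps the best candidate. Everything below (`stages`, `score`) is a hypothetical interface, not DAASH's actual API:
```python
def compose_attacks(x, label, stages, score):
    """Run attack stages in sequence, each refining the previous
    adversarial example; return the best-scoring candidate."""
    adv, best, best_s = x, x, score(x, x, label)
    for attack in stages:
        adv = attack(adv, label)   # each stage starts from the last output
        s = score(x, adv, label)   # e.g., success minus perceptual distance
        if s > best_s:
            best, best_s = adv, s
    return best
```
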
LookWhere? Efficient Visual Recognition by Learning Where to Look and What to See from Self-Supervision
Positive · Artificial Intelligence
The LookWhere method introduces an innovative approach to visual recognition by utilizing adaptive computation, allowing for efficient processing of images without the need to fully compute high-resolution inputs. This technique involves a low-resolution selector and a high-resolution extractor that work together through self-supervised learning, enhancing the performance of vision transformers.
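A rough sketch of the selector/extractor split, assuming the selector returns one saliency score per fixed-size patch of a downsampled copy; the patching scheme and module interfaces are illustrative, not the paper's:
```python
import torch
import torch.nn.functional as F

def look_where(image, selector, extractor, k=8, patch=32):
    """Score coarse patches on a low-res copy, then run the heavy
    extractor only on the top-k patches of the full-res image."""
    lowres = F.interpolate(image, scale_factor=0.25,
                           mode="bilinear", align_corners=False)
    scores = selector(lowres)          # hypothetical: one score per patch
    _, _, H, W = image.shape
    cols = W // patch
    crops = []
    for i in scores.topk(k).indices.tolist():
        r, c = (i // cols) * patch, (i % cols) * patch
        crops.append(image[..., r:r + patch, c:c + patch])
    return extractor(torch.cat(crops))  # hypothetical extractor over crops
```
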
Intra-Class Probabilistic Embeddings for Uncertainty Estimation in Vision-Language Models
Positive · Artificial Intelligence
A new method for uncertainty estimation in vision-language models (VLMs) has been introduced, focusing on enhancing the reliability of models like CLIP. This training-free, post-hoc approach utilizes visual feature consistency to create class-specific probabilistic embeddings, enabling better detection of erroneous predictions without requiring fine-tuning or extensive training data.
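The construction is not detailed in this digest; one simple reading of "class-specific probabilistic embeddings" is a per-class diagonal Gaussian fit over frozen image features, with low density under the predicted class flagging an unreliable prediction. This is a hedged sketch under that assumption, not the paper's method:
```python
import numpy as np

def fit_class_gaussians(feats, preds, num_classes, eps=1e-3):
    """Per-class mean and diagonal variance over image features,
    computed post hoc from a frozen encoder (training-free)."""
    stats = {}
    for c in range(num_classes):
        f = feats[preds == c]
        if len(f):
            stats[c] = (f.mean(0), f.var(0) + eps)
    return stats

def confidence(feat, pred, stats):
    """Negative Mahalanobis-style distance to the predicted class's
    Gaussian; lower values suggest an unreliable prediction."""
    mu, var = stats[pred]
    return -(((feat - mu) ** 2) / var).sum()
```
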
Rethinking Training Dynamics in Scale-wise Autoregressive Generation
Positive · Artificial Intelligence
Recent advancements in autoregressive generative models have led to the introduction of Self-Autoregressive Refinement (SAR), which aims to improve image generation quality by addressing exposure bias and optimization complexity. The proposed Stagger-Scale Rollout (SSR) mechanism allows models to learn from their intermediate predictions, enhancing the training dynamics in scale-wise autoregressive generation.
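SSR's mechanics are not specified here; the exposure bias it targets is commonly mitigated by scheduled-sampling-style training, sketched below, where the next scale is sometimes conditioned on the model's own prediction instead of the ground truth. The `model` interface is hypothetical:
```python
import random

def rollout_training_step(model, scales, p_self=0.3):
    """Train scale by scale; with probability p_self, condition the next
    scale on the model's own prediction rather than the ground truth."""
    prev = scales[0]                     # coarsest ground-truth scale
    loss = 0.0
    for target in scales[1:]:
        pred = model(prev)               # predict the next (finer) scale
        loss += model.loss(pred, target)
        prev = pred if random.random() < p_self else target
    return loss
```
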
Thermodynamic bounds on energy use in quasi-static Deep Neural Networks
Neutral · Artificial Intelligence
Recent research has established thermodynamic bounds on energy consumption in quasi-static deep neural networks (DNNs), revealing that inference can occur in a thermodynamically reversible manner with minimal energy costs. This contrasts with the Landauer limit that applies to digital hardware, suggesting a new framework for understanding energy use in DNNs.
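For context, the Landauer limit mentioned above lower-bounds the heat dissipated per irreversibly erased bit; the reversible-inference regime the paper studies is one where this cost is, in principle, avoided:
```latex
E_{\text{erase}} \;\ge\; k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\,\mathrm{J}
\quad \text{per bit at } T = 300\,\mathrm{K}.
```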