ZQBA: Zero Query Black-box Adversarial Attack

arXiv — cs.CV · Monday, December 8, 2025 at 5:00:00 AM
  • The Zero Query Black-box Adversarial (ZQBA) attack marks a significant advance in adversarial machine learning: it generates adversarial samples without querying the target model at all and without training dedicated surrogate models. Instead, it uses feature maps from Deep Neural Networks (DNNs) to craft deceptive images that mislead target models, demonstrating effectiveness across various datasets including CIFAR and Tiny ImageNet.
  • This development matters because it lowers the practical barrier to mounting adversarial attacks, with direct implications for the robustness of deployed machine learning models. By removing the reliance on repeated queries, ZQBA becomes applicable in real-world scenarios where query access or compute is limited, and it opens new avenues for research on both attacks and defenses.
  • The emergence of ZQBA aligns with ongoing discussions in the AI community regarding the effectiveness of adversarial attacks and the need for improved defenses. As researchers explore various methodologies for generating adversarial examples, including multi-objective frameworks and dynamic parameter optimization, the ZQBA approach highlights the importance of efficiency and transferability in adversarial strategies, contributing to the broader discourse on model robustness and security.
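As a rough illustration of the idea in the summary above — perturbing an image so that its feature representation under a local surrogate DNN moves toward that of another image, while issuing zero queries to the target model — here is a minimal sketch. The linear-ReLU `feature_map`, the projected-gradient loop, and all parameter values are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def feature_map(x, W):
    # Toy surrogate "feature extractor": a single linear layer with ReLU.
    return np.maximum(0.0, W @ x)

def zqba_perturb(x_src, x_tgt, W, eps=0.1, steps=50, lr=0.05):
    # Nudge x_src so its surrogate features approach those of x_tgt,
    # keeping the perturbation inside an L-inf ball of radius eps.
    delta = np.zeros_like(x_src)
    f_tgt = feature_map(x_tgt, W)
    for _ in range(steps):
        f = feature_map(x_src + delta, W)
        # Gradient of 0.5 * ||f - f_tgt||^2 w.r.t. the input
        # (ReLU mask applied before back-projecting through W).
        mask = (f > 0).astype(float)
        delta -= lr * (W.T @ (mask * (f - f_tgt)))
        delta = np.clip(delta, -eps, eps)  # project back into the budget
    return x_src + delta
```

Because the gradient comes entirely from the local surrogate's feature maps, no queries ever reach the black-box target; whether the perturbed image actually fools it is then a question of transferability.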
— via World Pulse Now AI Editorial System


Continue Reading
Oscillations Make Neural Networks Robust to Quantization
Positive · Artificial Intelligence
Recent research challenges the notion that weight oscillations during Quantization Aware Training (QAT) are merely undesirable effects, proposing instead that they are crucial for enhancing the robustness of neural networks. The study demonstrates that these oscillations, induced by a new regularizer, can help maintain performance across various quantization levels, particularly in models like ResNet-18 and Tiny Vision Transformer evaluated on CIFAR-10 and Tiny ImageNet datasets.
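To see why weight oscillation arises in QAT at all, consider a toy straight-through-estimator setup (a hypothetical example, not the paper's regularizer): a latent weight whose optimum lies between two quantization levels bounces between them on successive training steps.

```python
import numpy as np

def fake_quant(w, step=1.0):
    # Uniform quantizer used in QAT's forward pass.
    return step * np.round(w / step)

# Straight-through estimator: the gradient treats round() as identity, so a
# latent weight whose optimum sits between two quantization levels keeps
# crossing the rounding boundary and its quantized value oscillates.
w, lr, target = 0.4, 0.2, 0.5   # optimum 0.5 lies between levels 0.0 and 1.0
history = []
for _ in range(6):
    q = fake_quant(w)
    history.append(float(q))
    grad = 2.0 * (q - target)   # STE gradient of the toy loss (q - target)^2
    w -= lr * grad
# history alternates: [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
```

The paper's claim is that such oscillations, far from being a nuisance, can be deliberately induced by a regularizer so the network stays accurate across several quantization levels at once.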
The Inductive Bottleneck: Data-Driven Emergence of Representational Sparsity in Vision Transformers
Neutral · Artificial Intelligence
Recent research has identified an 'Inductive Bottleneck' in Vision Transformers (ViTs), where these models exhibit a U-shaped entropy profile, compressing information in middle layers before expanding it for final classification. This phenomenon is linked to the semantic abstraction required by specific tasks and is not merely an architectural flaw but a data-dependent adaptation observed across various datasets such as UC Merced, Tiny ImageNet, and CIFAR-100.
Fast and Flexible Robustness Certificates for Semantic Segmentation
Positive · Artificial Intelligence
A new class of certifiably robust Semantic Segmentation networks has been introduced, featuring built-in Lipschitz constraints that enhance their efficiency and pixel accuracy on challenging datasets like Cityscapes. This advancement addresses the vulnerability of Deep Neural Networks to small perturbations that can significantly alter predictions.
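A hedged sketch of how a built-in Lipschitz constraint yields a certificate (a generic margin bound, not the paper's specific construction): if every per-pixel logit is L-Lipschitz in the input, a perturbation of L2 norm r can close the top-two logit gap by at most 2·L·r, so each pixel's prediction is provably stable within radius margin/(2L).

```python
import numpy as np

def pixelwise_certificates(logits, lip_const):
    # logits: (C, H, W) per-pixel class scores from a lip_const-Lipschitz net.
    # A perturbation of L2 norm r moves each logit by at most lip_const * r,
    # so the top-two gap closes by at most 2 * lip_const * r.
    s = np.sort(logits, axis=0)[::-1]   # descending along the class axis
    margin = s[0] - s[1]                # per-pixel top-1 vs runner-up gap
    return margin / (2.0 * lip_const)   # (H, W) map of certified radii
```

Because the Lipschitz constant is enforced by construction, the certificate is a single cheap post-processing pass over the logits rather than an expensive per-image verification.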
Approximate Multiplier Induced Error Propagation in Deep Neural Networks
Neutral · Artificial Intelligence
A new analytical framework has been introduced to characterize the error propagation induced by Approximate Multipliers (AxMs) in Deep Neural Networks (DNNs). This framework connects the statistical error moments of AxMs to the distortion in General Matrix Multiplication (GEMM), revealing that the multiplier mean error predominantly governs the distortion observed in DNN accuracy, particularly when evaluated on ImageNet scale networks.
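The headline finding — that the multiplier's mean error dominates the distortion seen in GEMM — can be reproduced in a toy Monte-Carlo model (all names and error statistics here are illustrative assumptions, not the paper's analytical framework):

```python
import numpy as np

def approx_gemm(A, B, mean_err, std_err, rng):
    # GEMM in which every scalar product carries a relative error drawn from
    # N(mean_err, std_err) -- a toy stand-in for an approximate multiplier.
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            err = rng.normal(mean_err, std_err, size=A.shape[1])
            C[i, j] = np.sum(A[i, :] * B[:, j] * (1.0 + err))
    return C

def gemm_distortion(A, B, **err_kw):
    # Relative Frobenius distortion of the approximate GEMM vs the exact one.
    C_exact = A @ B
    diff = approx_gemm(A, B, **err_kw) - C_exact
    return np.linalg.norm(diff) / np.linalg.norm(C_exact)
```

Zero-mean errors largely cancel across the length-K accumulation (shrinking like 1/√K), while a bias survives the summation intact — so at equal spread, a biased multiplier produces markedly larger distortion, consistent with the mean error governing DNN accuracy loss.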
Thermodynamic bounds on energy use in quasi-static Deep Neural Networks
Neutral · Artificial Intelligence
Recent research has established thermodynamic bounds on energy consumption in quasi-static deep neural networks (DNNs), revealing that inference can occur in a thermodynamically reversible manner with minimal energy costs. This contrasts with the Landauer limit that applies to digital hardware, suggesting a new framework for understanding energy use in DNNs.
Revolutionizing Mixed Precision Quantization: Towards Training-free Automatic Proxy Discovery via Large Language Models
Positive · Artificial Intelligence
A novel framework for Mixed-Precision Quantization (MPQ) has been introduced, leveraging Large Language Models (LLMs) to automate the discovery of training-free proxies, addressing inefficiencies in traditional methods that require expert knowledge and manual design. This innovation aims to enhance the deployment of Deep Neural Networks (DNNs) by overcoming memory limitations.