Improving Adversarial Transferability with Neighbourhood Gradient Information

arXiv — cs.CV · Thursday, November 13, 2025 at 5:00:00 AM
The study titled 'Improving Adversarial Transferability with Neighbourhood Gradient Information' presents an approach for enhancing the transferability of adversarial examples across deep neural networks (DNNs), which are known to be susceptible to such attacks. The proposed NGI-Attack exploits Neighbourhood Gradient Information (NGI) through two techniques, Example Backtracking and Multiplex Mask, to substantially improve attack performance. The method not only accumulates gradient information effectively but also forces the network to attend to non-discriminative regions, yielding richer gradient information. Extensive experiments validate the approach, which achieves an attack success rate of 95.2%. The work is particularly relevant because it narrows the performance gap between surrogate and target models in black-box attack scenarios, underscoring the need to strengthen DNN robustness against adversarial threats. The method's ability to integrate with existing attack algorithms further adds to its practical value.
— via World Pulse Now AI Editorial System
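
Since the article only describes the attack at a high level, the following is a minimal, hedged sketch of how an NGI-style transfer attack could look in PyTorch. The warm-up loop standing in for Example Backtracking, the random binary masks standing in for Multiplex Mask, and all parameter values are interpretive assumptions; this is not the authors' reference implementation.

```python
# Hedged sketch of an NGI-style transfer attack (not the paper's reference code).
# Assumptions: "Example Backtracking" is approximated by a warm-up pass that
# accumulates gradients along an initial adversarial path, and "Multiplex Mask"
# by averaging gradients over randomly masked copies of the input.
import torch
import torch.nn.functional as F

def ngi_style_attack(model, x, y, eps=16/255, steps=10, mu=1.0,
                     warmup_steps=5, n_masks=4, mask_ratio=0.3):
    """MI-FGSM-style attack seeded with neighbourhood gradient information."""
    alpha = eps / steps
    g = torch.zeros_like(x)  # accumulated (momentum) gradient

    # Warm-up: accumulate gradients from the neighbourhood of x; these
    # gradients only seed the momentum buffer before the attack proper.
    x_probe = x.clone()
    for _ in range(warmup_steps):
        x_probe = x_probe.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_probe), y)
        grad = torch.autograd.grad(loss, x_probe)[0]
        g = g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_probe = (x_probe + alpha * grad.sign()).clamp(0, 1)

    # Main loop: average gradients over randomly masked copies so the attack
    # also exploits non-discriminative regions of the input.
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad_sum = torch.zeros_like(x)
        for _ in range(n_masks):
            mask = (torch.rand_like(x[:, :1]) > mask_ratio).float()
            loss = F.cross_entropy(model(x_adv * mask), y)
            grad_sum = grad_sum + torch.autograd.grad(loss, x_adv)[0]
        grad = grad_sum / n_masks
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the L-inf ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

The momentum-style loop mirrors standard transfer attacks; only the warm-up seeding and the mask averaging are meant to convey the neighbourhood-gradient idea described above.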


Recommended Readings
Revisiting Data Scaling Law for Medical Segmentation
Positive · Artificial Intelligence
The study explores the scaling laws of deep neural networks in medical anatomical segmentation, revealing that larger training datasets lead to improved performance across various semantic tasks and imaging modalities. It highlights the significance of deformation-guided augmentation strategies, such as random elastic deformation and registration-guided deformation, in enhancing segmentation outcomes. The research aims to address the underexplored area of data scaling in medical imaging, proposing a novel image augmentation approach to generate diffeomorphic mappings.
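
For readers unfamiliar with this kind of augmentation, below is a minimal sketch of random elastic deformation applied jointly to an image and its segmentation mask. The scipy-based implementation and the alpha/sigma values are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of random elastic deformation for 2D image/label pairs, one of the
# deformation-guided augmentations mentioned in the summary. Parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_elastic_deform(image, label, alpha=30.0, sigma=4.0, rng=None):
    """Displace pixels with a smoothed random field; warp image and mask identically."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    # Smooth random displacement fields, scaled by alpha.
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([ys + dy, xs + dx])
    warped_img = map_coordinates(image, coords, order=1, mode="reflect")
    warped_lbl = map_coordinates(label, coords, order=0, mode="reflect")  # nearest for labels
    return warped_img, warped_lbl
```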
FQ-PETR: Fully Quantized Position Embedding Transformation for Multi-View 3D Object Detection
Positive · Artificial Intelligence
The paper titled 'FQ-PETR: Fully Quantized Position Embedding Transformation for Multi-View 3D Object Detection' addresses the challenges of deploying PETR models in autonomous driving due to their high computational costs and memory requirements. It introduces FQ-PETR, a fully quantized framework that aims to enhance efficiency without sacrificing accuracy. Key innovations include a Quantization-Friendly LiDAR-ray Position Embedding and techniques to mitigate accuracy degradation typically associated with quantization methods.
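
As context for what "fully quantized" implies, here is a generic 8-bit symmetric fake-quantization sketch. It illustrates the precision loss that naive per-tensor quantization of wide-ranged tensors such as position embeddings can introduce; it is not FQ-PETR's quantization-friendly LiDAR-ray embedding scheme, and the shapes and values are illustrative.

```python
# Generic 8-bit symmetric fake-quantization, shown only to illustrate the accuracy
# degradation that motivates FQ-PETR; this is not the paper's method.
import torch

def fake_quantize(t: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Quantize to signed integers and dequantize back (simulated integer inference)."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for int8
    scale = t.abs().max().clamp(min=1e-8) / qmax   # per-tensor symmetric scale
    q = torch.clamp(torch.round(t / scale), -qmax - 1, qmax)
    return q * scale

# Illustrative example: embeddings with a long-tailed value range lose precision
# under naive per-tensor quantization.
pe = torch.randn(900, 256) * torch.linspace(0.1, 10.0, 256)
err = (fake_quantize(pe) - pe).abs().mean()
print(f"mean abs quantization error: {err:.4f}")
```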
On the Relationship Between Adversarial Robustness and Decision Region in Deep Neural Networks
Positive · Artificial Intelligence
The article discusses evaluating deep neural networks (DNNs) in terms of both generalization performance and robustness against adversarial attacks. It notes that generalization metrics alone have become insufficient for distinguishing DNNs now that their performance has reached state-of-the-art levels. The study introduces the Populated Region Set (PRS) to analyze the internal properties of DNNs that influence robustness, revealing that a low PRS ratio correlates with improved adversarial robustness.
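
The summary does not define the PRS ratio precisely, so the sketch below encodes one plausible reading: identify each sample's decision region by the sign pattern of its penultimate-layer activations and divide the number of distinct patterns by the number of samples. That identification, and the helper names, are assumptions for illustration only, not necessarily the paper's exact definition.

```python
# Hedged sketch of a PRS-ratio style measurement: how many distinct "regions" do the
# training samples populate, relative to the number of samples? Region identification
# via penultimate-layer activation signs is an assumption for illustration.
import torch

@torch.no_grad()
def prs_ratio(feature_extractor, loader, device="cpu"):
    patterns = set()
    n_samples = 0
    for x, _ in loader:
        feats = feature_extractor(x.to(device))           # penultimate activations
        signs = (feats > 0).to(torch.uint8).cpu().numpy()
        for row in signs:
            patterns.add(row.tobytes())                   # one activation pattern per sample
        n_samples += x.shape[0]
    return len(patterns) / max(n_samples, 1)              # low ratio -> samples share regions
```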