Improving Adversarial Transferability with Neighbourhood Gradient Information
The study 'Improving Adversarial Transferability with Neighbourhood Gradient Information' presents an approach to improving the transferability of adversarial examples across deep neural networks (DNNs), which are known to be vulnerable to such attacks. The proposed NGI-Attack exploits Neighbourhood Gradient Information (NGI) through two techniques, Example Backtracking and Multiplex Mask, to significantly improve attack performance. The method not only accumulates gradient information effectively but also forces the network to focus on non-discriminative regions, yielding richer gradient information. Extensive experiments validate the approach, which achieves an attack success rate of 95.2%. The work is notable because it narrows the performance gap between surrogate and target models in black-box attack scenarios, underscoring the need to strengthen DNN robustness against adversarial threats. The ability to integrate this method with existing algor…
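The summary above does not spell out the paper's algorithm, but the general family of transfer attacks it belongs to can be illustrated. The sketch below is a generic, hypothetical example of a momentum-based sign-gradient attack step that averages gradients sampled from a point's neighbourhood; it is not the NGI-Attack itself, and the toy quadratic loss, sampling radius, and all parameter names are illustrative assumptions standing in for a real model's loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(x, target):
    # Gradient of a toy quadratic loss ||x - target||^2, a stand-in
    # for the gradient of a DNN's classification loss.
    return 2.0 * (x - target)

def neighbourhood_attack_step(x, target, momentum, eps_step=0.01,
                              radius=0.05, n_neighbours=4, decay=1.0):
    # Average gradients sampled around x (a generic stand-in for
    # "neighbourhood gradient information"), fold the result into a
    # momentum term, and take a signed ascent step on the loss.
    grads = [loss_grad(x + rng.uniform(-radius, radius, x.shape), target)
             for _ in range(n_neighbours)]
    g = np.mean(grads, axis=0)
    momentum = decay * momentum + g / (np.abs(g).sum() + 1e-12)
    return x + eps_step * np.sign(momentum), momentum

# Run a few attack iterations from a clean input.
x = np.zeros(8)
target = np.ones(8)
momentum = np.zeros_like(x)
for _ in range(10):
    x, momentum = neighbourhood_attack_step(x, target, momentum)
```

After the loop, `x` has moved in the direction that increases the toy loss, mimicking how iterative transfer attacks perturb an input to raise a surrogate model's loss; smoothing the gradient over a neighbourhood is one common way to make such perturbations generalize better to unseen target models.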
— via World Pulse Now AI Editorial System
