SPOOF: Simple Pixel Operations for Out-of-Distribution Fooling
Neutral | Artificial Intelligence
- A new study titled 'SPOOF: Simple Pixel Operations for Out-of-Distribution Fooling' reveals that deep neural networks (DNNs) still assign high confidence to inputs that do not resemble natural images. The research revisits fooling images and finds that modern architectures, particularly the transformer-based ViT-B/16, can be fooled with fewer queries than convolution-based models.
- The paper introduces SPOOF, a minimalist black-box attack that generates high-confidence fooling images using only minimal pixel modifications, underscoring persistent vulnerabilities in AI systems. This raises concerns about the reliability of DNNs in real-world applications and highlights the need for improved robustness against such attacks.
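The article does not detail SPOOF's procedure, but the general idea it describes, a query-based black-box attack that changes a few pixels and keeps only changes that raise the model's confidence in a target class, can be illustrated with a greedy sketch. Everything below (the `spoof_like_attack` function, the toy stand-in model, and all parameters) is a hypothetical illustration, not the paper's actual algorithm:

```python
import numpy as np

def spoof_like_attack(model, image, target, steps=300, seed=0):
    # Greedy black-box sketch (illustrative, not the paper's method):
    # perturb one random pixel per query and keep the change only if
    # the model's confidence in the target class increases.
    rng = np.random.default_rng(seed)
    best = image.copy()
    best_conf = model(best)[target]
    for _ in range(steps):
        cand = best.copy()
        y = rng.integers(0, cand.shape[0])
        x = rng.integers(0, cand.shape[1])
        cand[y, x] = rng.random()      # random new pixel value in [0, 1]
        conf = model(cand)[target]     # one black-box query
        if conf > best_conf:           # keep only improving edits
            best, best_conf = cand, conf
    return best, best_conf

# Toy stand-in "model": softmax over mean brightness of each image half.
# A real attack would query an actual classifier instead.
def toy_model(img):
    logits = np.array([img[:, :8].mean(), img[:, 8:].mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

start = np.zeros((16, 16))             # blank, unnatural input
adv, conf = spoof_like_attack(toy_model, start, target=1)
print(float(conf) > float(toy_model(start)[1]))
```

The sketch shows why such attacks are cheap to run: each step costs a single forward query and touches a single pixel, so the attacker needs no gradients or model internals, which matches the article's point that these fooling images are generated with minimal pixel modifications.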
— via World Pulse Now AI Editorial System
