Enhancing Adversarial Transferability in Visual-Language Pre-training Models via Local Shuffle and Sample-based Attack
Positive · Artificial Intelligence
A recent study highlights advances in attacking Visual-Language Pre-training (VLP) models, which underpin many multimodal tasks but remain susceptible to adversarial examples. The research introduces a local shuffle and sample-based attack that enhances adversarial transferability while addressing the overfitting caused by limited input diversity: local image regions are shuffled and multiple transformed samples are drawn, so perturbations generalize beyond the surrogate model rather than overfitting to a single view of it. This is significant because stronger transfer attacks expose weaknesses in VLP models, paving the way for more robust defenses and more reliable behavior in real-world deployments where adversarial inputs may occur.
— Curated by the World Pulse Now AI Editorial System
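
The blurb does not spell out the mechanics, but the general recipe behind shuffle-and-sample transfer attacks can be sketched. Below is a minimal, hypothetical PyTorch sketch of that recipe, not the paper's actual implementation: the names `shuffle_patches` and `local_shuffle_attack`, the cosine-similarity loss, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of a local-shuffle, sample-averaged transfer attack.
# Function names, loss, and hyperparameters are illustrative assumptions,
# not the paper's exact algorithm.
import torch


def shuffle_patches(x: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """Randomly permute a grid x grid tiling of each image.

    Assumes H and W are divisible by `grid`; one permutation is shared
    across the batch for simplicity.
    """
    b, c, h, w = x.shape
    ph, pw = h // grid, w // grid
    # (b, c, h, w) -> (b, grid*grid, c, ph, pw)
    patches = (x.unfold(2, ph, ph).unfold(3, pw, pw)
                .permute(0, 2, 3, 1, 4, 5)
                .reshape(b, grid * grid, c, ph, pw))
    patches = patches[:, torch.randperm(grid * grid, device=x.device)]
    # Reassemble the shuffled patches into an image.
    return (patches.reshape(b, grid, grid, c, ph, pw)
                   .permute(0, 3, 1, 4, 2, 5)
                   .reshape(b, c, h, w))


def local_shuffle_attack(encoder, x, eps=8 / 255, alpha=2 / 255,
                         steps=10, n_samples=4):
    """Untargeted transfer attack: average gradients over several randomly
    shuffled copies of the input so the perturbation does not overfit to a
    single view of the surrogate encoder."""
    feat_clean = encoder(x).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        grad = torch.zeros_like(x)
        for _ in range(n_samples):
            adv = shuffle_patches(x + delta)  # fresh shuffle per sample
            # Push the adversarial feature away from the clean feature.
            loss = -torch.cosine_similarity(encoder(adv), feat_clean).mean()
            grad += torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()               # gradient-ascent step
            delta.clamp_(-eps, eps)                    # L-inf budget
            delta.copy_((x + delta).clamp(0, 1) - x)   # valid pixel range
    return (x + delta).detach()
```

In this sketch, averaging gradients over several shuffled samples stands in for the sample-based ensemble described above; swapping in a different loss (for instance, an image-text contrastive loss against the VLP model's text encoder) or a different shuffle granularity would follow the same pattern.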




