Breaking the Stealth-Potency Trade-off in Clean-Image Backdoors with Generative Trigger Optimization
The introduction of Generative Clean-Image Backdoors (GCB) marks a significant advance in clean-image backdoor attacks, which compromise deep neural networks by manipulating only the labels of training samples while leaving the images untouched. Existing clean-image methods typically cause a noticeable drop in Clean Accuracy (CA), which undermines their stealth. GCB breaks this trade-off by optimizing the trigger itself through generative modeling, allowing the backdoor to be learned from a minimal set of poisoned examples while keeping the CA drop below 1%. The framework has proven versatile, adapting to six datasets, five architectures, and four tasks, including the first demonstration of clean-image backdoors in regression and segmentation. GCB also withstands most existing backdoor defenses, underscoring its potential impact on security-critical AI applications.
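To make the attack setting concrete, the sketch below illustrates the general clean-image poisoning idea in Python: images are never modified, and only the labels of samples that already contain a trigger pattern are rewritten to an attacker-chosen class. This is a minimal conceptual illustration, not the paper's method; the names `contains_trigger`, `poison_labels`, `TARGET_CLASS`, and `POISON_BUDGET`, as well as the placeholder trigger condition, are assumptions for illustration only (GCB's actual trigger is obtained by generative optimization).

```python
# Conceptual sketch of clean-image label poisoning (not the paper's code).
# Hypothetical names: contains_trigger, poison_labels, TARGET_CLASS, POISON_BUDGET.
import numpy as np

TARGET_CLASS = 0      # attacker-chosen label for triggered samples
POISON_BUDGET = 0.01  # fraction of the training set the attacker may relabel


def contains_trigger(image: np.ndarray) -> bool:
    """Stand-in for the trigger condition. In GCB this condition comes from an
    optimized, generatively modeled pattern; here it is a trivial placeholder
    (a mean-brightness threshold) purely for illustration."""
    return image.mean() > 0.6


def poison_labels(images: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return a copy of the labels in which samples that already contain the
    trigger are relabeled to TARGET_CLASS. The images themselves are never
    modified -- that is what makes the attack 'clean-image'."""
    poisoned = labels.copy()
    candidates = [i for i in range(len(images)) if contains_trigger(images[i])]
    budget = int(POISON_BUDGET * len(images))
    for i in candidates[:budget]:
        poisoned[i] = TARGET_CLASS
    return poisoned


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = rng.random((1000, 32, 32, 3))   # toy stand-in dataset
    labels = rng.integers(0, 10, size=1000)
    poisoned = poison_labels(images, labels)
    print(f"Relabeled {np.sum(poisoned != labels)} of {len(labels)} samples")
```

The key design point this sketch highlights is that the attacker's only lever is which samples get relabeled; GCB's contribution is choosing the trigger pattern so that very few such relabelings suffice, keeping clean accuracy nearly intact.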
— via World Pulse Now AI Editorial System