GuardFed: A Trustworthy Federated Learning Framework Against Dual-Facet Attacks
Federated learning enables collaborative model training while preserving data privacy, but it faces significant adversarial threats. Among the most serious are Dual-Facet Attacks (DFA), which compromise both the accuracy and the fairness of trained models. The introduction of DFA, together with its variants Synchronous DFA and Split DFA, underscores the complexity of these threats, which existing defenses have struggled to counter effectively. GuardFed addresses this gap with a self-adaptive defense framework that leverages a small amount of clean server-side data, supplemented with synthetic samples, to maintain a fairness-aware reference model. In extensive experiments, GuardFed preserved both accuracy and fairness under diverse attack conditions, marking a significant step forward in protecting federated learning systems from adversarial attacks.
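The summary does not spell out how a reference model is used during aggregation, but defenses of this family typically score client updates against a trusted server-side direction. The sketch below is a hypothetical illustration of that general idea, not GuardFed's actual algorithm: the function name `trust_weighted_aggregate` and the cosine-similarity scoring rule are assumptions for illustration only.

```python
import numpy as np

def trust_weighted_aggregate(client_updates, reference_update, eps=1e-12):
    """Illustrative reference-guided aggregation (NOT the GuardFed algorithm).

    Each client's model update is weighted by its cosine similarity to a
    reference update computed on the server's small clean dataset; updates
    pointing away from the reference (negative similarity) get zero weight.
    """
    ref_dir = reference_update / (np.linalg.norm(reference_update) + eps)
    scores = np.array([
        max(float(u @ ref_dir) / (np.linalg.norm(u) + eps), 0.0)
        for u in client_updates
    ])
    if scores.sum() == 0.0:
        # No client update aligns with the clean reference: fall back to it.
        return reference_update.copy()
    weights = scores / scores.sum()
    return np.sum([w * u for w, u in zip(weights, client_updates)], axis=0)

# Example: an honest update roughly follows the reference direction,
# while an attacker's update opposes it and is excluded from the average.
reference = np.array([1.0, 0.0])
honest = np.array([0.9, 0.1])
malicious = np.array([-1.0, 0.0])
aggregated = trust_weighted_aggregate([honest, malicious], reference)
```

Under this scoring rule the malicious update receives zero weight, so the aggregate reduces to the honest client's contribution; a real defense would combine such filtering with the fairness-aware objective the article describes.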
— via World Pulse Now AI Editorial System
