GuardFed: A Trustworthy Federated Learning Framework Against Dual-Facet Attacks

arXiv — cs.LG · Thursday, November 13, 2025 at 5:00:00 AM
GuardFed is a defense framework for federated learning, a paradigm that enables collaborative model training without sharing raw data. Federated learning nonetheless faces significant threats, notably Dual-Facet Attacks (DFA), which degrade both the accuracy and the fairness of the trained model. The paper introduces DFA together with two variants, Synchronous DFA and Split DFA, and shows that existing defenses struggle to counter them. GuardFed closes this gap with a self-adaptive defense that uses a small amount of clean server data, augmented with synthetic samples, to maintain a fairness-aware reference model against which client contributions are evaluated. Extensive experiments show that GuardFed preserves both accuracy and fairness under diverse conditions, a notable step toward protecting federated learning systems from adversarial attacks.
— via World Pulse Now AI Editorial System
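The summary above does not spell out how GuardFed's reference model is used, so the following is only a minimal sketch of the general idea of reference-guided robust aggregation: the server derives a trusted update from its small clean dataset, scores each client update against it, and aggregates only the updates that pass the check. The function name, the cosine-similarity scoring rule, and the threshold `tau` are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def reference_guided_aggregate(client_updates, reference_update, tau=0.0):
    """Aggregate client updates, filtering against a trusted reference.

    client_updates   : list of 1-D numpy arrays (flattened model updates)
    reference_update : update computed on the server's small clean dataset
    tau              : minimum cosine similarity to the reference (assumed rule)
    """
    ref_dir = reference_update / (np.linalg.norm(reference_update) + 1e-12)
    # Cosine similarity of each client update to the reference direction.
    scores = np.array(
        [u @ ref_dir / (np.linalg.norm(u) + 1e-12) for u in client_updates]
    )
    trusted = scores > tau
    if not trusted.any():
        # No client passes the check: fall back to the clean reference update.
        return reference_update
    # Weight surviving updates by their (normalized) similarity scores.
    weights = scores[trusted] / scores[trusted].sum()
    return np.average(np.array(client_updates)[trusted], axis=0, weights=weights)
```

In this sketch, an update pointing opposite the reference (as a crude model of a poisoned contribution) is excluded from aggregation, while benign updates are combined with similarity-proportional weights; GuardFed's actual scoring additionally accounts for fairness, which this toy example omits.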


Continue Reading
Accelerated Methods with Complexity Separation Under Data Similarity for Federated Learning Problems
Neutral · Artificial Intelligence
A recent study has formalized the challenges posed by heterogeneity in data distribution within federated learning tasks as an optimization problem, proposing several communication-efficient methods and an optimal algorithm for the convex case. The theory has been validated through experiments across various problems.
