GuardFed: A Trustworthy Federated Learning Framework Against Dual-Facet Attacks

arXiv (cs.LG) · Thursday, November 13, 2025 at 5:00:00 AM
Federated learning enables collaborative model training while preserving data privacy, but it faces significant threats, particularly from Dual-Facet Attacks (DFA), which compromise both the accuracy and the fairness of trained models. The paper introduces DFA along with two variants, Synchronous DFA and Split DFA, and shows that existing defenses struggle to counter them effectively. GuardFed addresses this gap with a self-adaptive defense framework that leverages a small amount of clean server data and synthetic samples to maintain a fairness-aware reference model. In extensive experiments, GuardFed preserves both accuracy and fairness under diverse attack conditions, marking a significant step forward in protecting federated learning systems from adversarial attacks.
— via World Pulse Now AI Editorial System
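The summary above describes a fairness-aware reference model maintained from clean server data, but not how it is used during aggregation. The sketch below is one plausible reading, with hypothetical names and a hypothetical scoring rule (the paper's actual defense may differ): each client update is scored against the reference model on both accuracy and fairness gap, and scores become softmax trust weights.

```python
import numpy as np

def predict(w, X):
    # Toy linear classifier: label 1 where X @ w > 0
    return (X @ w > 0).astype(int)

def accuracy(w, X, y):
    return float(np.mean(predict(w, X) == y))

def fairness_gap(w, X, y, groups):
    # Absolute accuracy difference between two demographic groups (0 and 1)
    accs = [accuracy(w, X[groups == g], y[groups == g]) for g in (0, 1)]
    return abs(accs[0] - accs[1])

def guardfed_aggregate(client_ws, ref_w, X, y, groups, lam=0.5):
    # Hypothetical scoring rule (not from the paper): reward accuracy
    # relative to the fairness-aware reference model, penalize any increase
    # in the fairness gap, then turn scores into softmax trust weights.
    ref_acc = accuracy(ref_w, X, y)
    ref_gap = fairness_gap(ref_w, X, y, groups)
    scores = np.array([
        (accuracy(w, X, y) - ref_acc)
        - lam * (fairness_gap(w, X, y, groups) - ref_gap)
        for w in client_ws
    ])
    trust = np.exp(scores) / np.exp(scores).sum()
    return np.average(np.stack(client_ws), axis=0, weights=trust)
```

In this toy setup, a model-flipping client scores roughly one unit below a benign client and therefore receives a much smaller trust weight, so the aggregate stays close to the benign direction.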


Recommended Readings
Accuracy is Not Enough: Poisoning Interpretability in Federated Learning via Color Skew
Negative · Artificial Intelligence
Recent research highlights a new class of attacks in federated learning that compromise model interpretability without affecting accuracy. The study shows that adversarial clients can apply small color perturbations that shift a model's saliency maps away from meaningful regions while leaving predictions unchanged. The method, termed the Chromatic Perturbation Module, systematically crafts adversarial examples by altering color contrasts, persistently poisoning the model's internal feature attributions and challenging assumptions about model reliability.
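To make "small color perturbations" concrete, here is a minimal illustrative sketch: a contrast shift applied to a single color channel around its mean. This is only a stand-in for the paper's Chromatic Perturbation Module, which searches for perturbations systematically; the function name and the fixed `alpha` are assumptions for illustration.

```python
import numpy as np

def chromatic_perturbation(img, channel=0, alpha=0.05):
    # Nudge the contrast of one color channel around its mean value.
    # Illustrative only: the actual module optimizes such color-contrast
    # shifts to move saliency maps while preserving predictions.
    out = img.astype(np.float64).copy()
    mean = out[..., channel].mean()
    out[..., channel] = mean + (1.0 + alpha) * (out[..., channel] - mean)
    return np.clip(out, 0.0, 1.0)
```

For images in [0, 1], the per-pixel change is bounded by `alpha`, which is why such perturbations can stay visually and predictively negligible while still altering gradient-based attributions.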
Optimal Look-back Horizon for Time Series Forecasting in Federated Learning
Neutral · Artificial Intelligence
Selecting an appropriate look-back horizon is a key challenge in time series forecasting (TSF), especially in federated learning contexts where data is decentralized and heterogeneous. This paper proposes a framework for adaptive horizon selection in federated TSF using an intrinsic space formulation. It introduces a synthetic data generator that captures essential temporal structures in client data, such as autoregressive dependencies and seasonality, while considering client-specific variations.
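The idea of choosing a look-back horizon from synthetic data with autoregressive and seasonal structure can be sketched as follows. This is a simplified stand-in, not the paper's intrinsic-space formulation: the generator, candidate set, and selection-by-held-out-error rule are all illustrative assumptions.

```python
import numpy as np

def synth_series(n=400, phi=0.8, period=24, seed=0):
    # Toy synthetic generator: AR(1) dependency plus a seasonal component
    # (a stand-in for the paper's client-aware synthetic data generator).
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=0.3)
    return x + np.sin(2 * np.pi * np.arange(n) / period)

def horizon_error(series, h):
    # Held-out one-step-ahead MSE of a least-squares linear predictor
    # that looks back h steps.
    n = len(series)
    X = np.array([series[t - h:t] for t in range(h, n)])
    y = series[h:n]
    split = int(0.8 * len(y))
    w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    return float(np.mean((X[split:] @ w - y[split:]) ** 2))

def select_horizon(series, candidates=(2, 6, 12, 24, 48)):
    # Pick the look-back horizon with the lowest held-out forecast error.
    return min(candidates, key=lambda h: horizon_error(series, h))
```

In a federated setting, each client could run this selection on synthetic series matched to its own temporal structure, yielding client-specific horizons without sharing raw data.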