FedPM: Federated Learning Using Second-order Optimization with Preconditioned Mixing of Local Parameters

arXiv — cs.LG · Thursday, November 13, 2025 at 5:00:00 AM
Federated Preconditioned Mixing (FedPM) addresses a key weakness of prior second-order Federated Learning (FL) methods such as LocalNewton, LTDA, and FedSophia: drift in the local preconditioners that disrupts convergence. FedPM refines the local update rules and performs preconditioned mixing of local parameters on the server, mitigating this drift and improving test accuracy. The accompanying convergence analysis establishes a superlinear rate for strongly convex objectives, and extensive experiments on heterogeneous data show consistent improvements over conventional methods. The result matters because it strengthens the reliability and efficiency of FL, which is increasingly used in applications where data privacy and decentralized learning are paramount.
— via World Pulse Now AI Editorial System
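
To make the idea of server-side preconditioned mixing concrete, here is a minimal sketch of what curvature-weighted aggregation of client parameters could look like. The function name preconditioned_mix, the use of dense per-client curvature matrices, and the specific weighting scheme are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def preconditioned_mix(local_params, local_preconds):
    """Hypothetical sketch of server-side preconditioned mixing.

    local_params:   list of per-client parameter vectors theta_k, shape (d,)
    local_preconds: list of per-client preconditioners H_k, shape (d, d),
                    e.g. local curvature (Hessian) approximations.

    Returns theta = (sum_k H_k)^{-1} (sum_k H_k theta_k), i.e. parameters
    averaged with curvature-aware weights rather than a plain
    FedAvg-style arithmetic mean.
    """
    H_sum = sum(local_preconds)                         # aggregate curvature
    weighted = sum(H @ p for H, p in zip(local_preconds, local_params))
    return np.linalg.solve(H_sum, weighted)             # preconditioned average


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, clients = 5, 3
    params = [rng.normal(size=d) for _ in range(clients)]
    # Symmetric positive-definite curvature estimates for the sketch.
    preconds = []
    for _ in range(clients):
        A = rng.normal(size=(d, d))
        preconds.append(A @ A.T + np.eye(d))
    print(preconditioned_mix(params, preconds))
```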


Recommended Readings
Accuracy is Not Enough: Poisoning Interpretability in Federated Learning via Color Skew
Negative · Artificial Intelligence
Recent research highlights a new class of attacks in federated learning that compromise model interpretability without impacting accuracy. The study shows that adversarial clients can apply small color perturbations that shift a model's saliency maps away from meaningful regions while preserving its predictions. The proposed Chromatic Perturbation Module systematically crafts such examples by altering color contrasts, producing persistent poisoning of the model's internal feature attributions and challenging assumptions about model reliability.
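
As a rough illustration of the general idea of a color-skew perturbation, the sketch below applies a small per-channel gain and shift to RGB images. The function name color_skew and the chosen gain/shift values are assumptions for illustration; this is not the paper's Chromatic Perturbation Module.

```python
import numpy as np

def color_skew(images, channel_gains=(1.05, 1.0, 0.95),
               channel_shifts=(0.02, 0.0, -0.02)):
    """Hypothetical sketch: apply a subtle per-channel gain/shift to RGB
    images (values in [0, 1], shape (N, H, W, 3)). The change is meant to
    be visually minor and to leave class predictions largely intact while
    shifting the color statistics that feature-attribution methods rely on.
    """
    gains = np.asarray(channel_gains).reshape(1, 1, 1, 3)
    shifts = np.asarray(channel_shifts).reshape(1, 1, 1, 3)
    return np.clip(images * gains + shifts, 0.0, 1.0)


if __name__ == "__main__":
    batch = np.random.default_rng(0).random((4, 32, 32, 3))
    poisoned = color_skew(batch)
    print("max per-pixel change:", np.abs(poisoned - batch).max())
```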
Optimal Look-back Horizon for Time Series Forecasting in Federated Learning
Neutral · Artificial Intelligence
Selecting an appropriate look-back horizon is a key challenge in time series forecasting (TSF), especially in federated learning contexts where data is decentralized and heterogeneous. This paper proposes a framework for adaptive horizon selection in federated TSF using an intrinsic space formulation. It introduces a synthetic data generator that captures essential temporal structures in client data, such as autoregressive dependencies and seasonality, while considering client-specific variations.
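
For intuition about the kind of synthetic data generator described above, here is a minimal sketch that combines an AR(1) component with a sinusoidal seasonal term, with per-client parameters standing in for client heterogeneity. The function name synth_client_series and its parameters are illustrative assumptions, not the paper's generator.

```python
import numpy as np

def synth_client_series(length=500, ar_coef=0.7, season_period=24,
                        season_amp=1.0, noise_std=0.1, seed=0):
    """Hypothetical sketch of a synthetic series for federated TSF experiments:
    an AR(1) process plus a sinusoidal seasonal term. Varying ar_coef,
    season_period, and season_amp across clients models client-specific
    temporal structure.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(length)
    for t in range(1, length):
        x[t] = ar_coef * x[t - 1] + rng.normal(scale=noise_std)
    season = season_amp * np.sin(2 * np.pi * np.arange(length) / season_period)
    return x + season


if __name__ == "__main__":
    # Heterogeneous clients: different AR strength and seasonality per client.
    clients = [synth_client_series(ar_coef=a, season_period=p, seed=i)
               for i, (a, p) in enumerate([(0.5, 12), (0.8, 24), (0.9, 48)])]
    print([round(float(c.std()), 3) for c in clients])
```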