pFedBBN: A Personalized Federated Test-Time Adaptation with Balanced Batch Normalization for Class-Imbalanced Data

arXiv — cs.LG — Tuesday, November 25, 2025, 5:00:00 AM
  • pFedBBN is a personalized federated test-time adaptation framework that addresses the critical challenge of class imbalance in federated learning. It uses balanced batch normalization to improve local client adaptation, particularly under unseen data distributions and domain shifts.
  • This development is significant because existing methods typically require labeled data or client coordination; pFedBBN removes both requirements, improving the adaptability of federated learning systems in real-world deployments.
  • Ongoing challenges in federated learning, such as client heterogeneity and the need for personalized fine-tuning, underscore the value of frameworks like pFedBBN. These innovations aim to balance global model performance against local data characteristics while preserving data privacy and model robustness in decentralized environments.
— via World Pulse Now AI Editorial System
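The summary above says pFedBBN replaces standard batch normalization with a class-balanced variant so that majority classes do not dominate the normalization statistics during test-time adaptation. The paper's exact formulation is not given here, so the sketch below is only illustrative: function names are hypothetical, and it averages per-class batch moments uniformly across classes rather than weighting by sample counts. Since test data is unlabeled, a real test-time system would use pseudo-labels where this sketch takes `labels`.

```python
import numpy as np

def balanced_bn_stats(x, labels, num_classes):
    """Class-balanced batch statistics (illustrative sketch, not the
    paper's exact method): compute per-class mean/variance, then average
    them uniformly so majority classes do not dominate."""
    means, variances = [], []
    for c in range(num_classes):
        xc = x[labels == c]
        if len(xc) == 0:
            continue  # skip classes absent from this batch
        means.append(xc.mean(axis=0))
        variances.append(xc.var(axis=0))
    # Uniform average over the classes present; standard BN would
    # instead weight each sample equally (i.e., weight by class counts).
    mu = np.mean(means, axis=0)
    var = np.mean(variances, axis=0)
    return mu, var

def balanced_bn(x, labels, num_classes, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch with class-balanced statistics."""
    mu, var = balanced_bn_stats(x, labels, num_classes)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```

On a 9-to-1 imbalanced batch where class 0 sits at +1 and class 1 at -1, standard BN would center around the majority-skewed mean of 0.8, while the uniform class average centers at 0, which is the kind of correction the balanced statistics are meant to provide.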


Continue Reading
One-Shot Federated Ridge Regression: Exact Recovery via Sufficient Statistic Aggregation
Neutral — Artificial Intelligence
A recent study introduces a novel approach to federated ridge regression, demonstrating that iterative communication between clients and a central server is unnecessary for achieving exact recovery of the centralized solution. By aggregating sufficient statistics from clients in a single transmission, the server can reconstruct the global solution through matrix inversion, significantly reducing communication overhead.
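The mechanism described above follows the classical closed form of ridge regression: each client transmits its sufficient statistics X_kᵀX_k and X_kᵀy_k once, and the server solves (Σ X_kᵀX_k + λI)w = Σ X_kᵀy_k, which equals the centralized solution exactly. A minimal sketch of that aggregation (function names are my own, not from the paper):

```python
import numpy as np

def client_statistics(X, y):
    """Sufficient statistics a client sends in a single transmission."""
    return X.T @ X, X.T @ y

def server_solve(stats, lam):
    """Aggregate client statistics and recover the centralized
    ridge regression solution via one linear solve."""
    d = stats[0][0].shape[0]
    G = sum(s[0] for s in stats)   # sum of X_k^T X_k over clients
    b = sum(s[1] for s in stats)   # sum of X_k^T y_k over clients
    return np.linalg.solve(G + lam * np.eye(d), b)
```

Because X ᵀX and X ᵀy decompose as sums over rows, the aggregated solution is identical to fitting ridge regression on the pooled data, with no iterative communication rounds.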
Attacks on fairness in Federated Learning
Negative — Artificial Intelligence
Recent research highlights a new type of attack on Federated Learning (FL) that compromises the fairness of trained models, revealing that controlling just one client can skew performance distributions across various attributes. This raises concerns about the integrity of models in sensitive applications where fairness is critical.
