FedPoP: Federated Learning Meets Proof of Participation

arXiv — cs.LG · Wednesday, November 12, 2025 at 5:00:00 AM
The introduction of FedPoP marks a notable advance in federated learning (FL), a paradigm in which clients contribute to a global model without sharing their local data. As machine learning models are increasingly monetized, proving participation in their training has become essential for establishing ownership claims. FedPoP addresses this need with nonlinkable proofs of participation that preserve client anonymity without requiring heavy computation or a public ledger, and it is designed to integrate with existing secure aggregation protocols, broadening its applicability to real-world FL deployments. In the reported evaluation, FedPoP adds only 0.97 seconds of overhead per training round, and a client can prove its past participation in 0.0612 seconds. These results suggest that FedPoP is not only novel but also practical for deployments that require auditable participation while preserving privacy.
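The article does not specify FedPoP's underlying cryptography, but nonlinkable participation receipts are commonly built from blind signatures: the server signs a blinded token during a training round, and the client later unblinds it into a receipt the server cannot tie back to any particular signing request. The sketch below illustrates that general idea with textbook RSA blind signatures; the toy key size, the token format, and every name in it are illustrative assumptions, not FedPoP's actual protocol.

```python
# Illustrative sketch only: FedPoP's construction is not detailed in the
# article. This shows how a blind signature (textbook RSA, toy parameters)
# can yield a per-round participation receipt that the issuing server
# cannot later link to the signing request that produced it.

import hashlib
import secrets
from math import gcd

# --- Toy RSA keypair for the aggregation server (NOT secure parameters) ---
p, q = 1000003, 1000033          # hypothetical small primes for illustration
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # server's private signing exponent

def h(msg: bytes) -> int:
    """Hash a message into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- Client side: blind a fresh participation token ---
token = secrets.token_bytes(16)          # random token, not tied to identity
m = h(token)
while True:                              # blinding factor r coprime to n
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n         # the server sees only this value

# --- Server side: sign the blinded value after the client participates ---
blinded_sig = pow(blinded, d, n)

# --- Client side: unblind into a valid signature on the raw token ---
sig = (blinded_sig * pow(r, -1, n)) % n

# --- Anyone can verify the receipt against the server's public key (n, e) ---
assert pow(sig, e, n) == m
print("participation receipt verifies; server never saw token or sig")
```

Because the server only ever observes `blinded`, later presenting `(token, sig)` proves participation without revealing which signing request it came from. A real scheme would use secure key sizes and bind the token to the round's aggregation transcript; this sketch only shows where the nonlinkability comes from.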

Continue Reading
Accelerated Methods with Complexity Separation Under Data Similarity for Federated Learning Problems
Neutral · Artificial Intelligence
A recent study formalizes the challenges posed by heterogeneous data distributions in federated learning as an optimization problem, proposes several communication-efficient methods along with an optimal algorithm for the convex case, and validates the theory experimentally across a range of problems.
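The blurb does not reproduce the paper's accelerated methods, but the problem setting can be made concrete with a standard baseline: a FedAvg-style local-SGD loop over deliberately heterogeneous client objectives. Everything below (the quadratic losses, step size, and round counts) is a hypothetical illustration of the optimization problem, not the paper's algorithm.

```python
# Minimal FedAvg-style local SGD on heterogeneous quadratics: client i
# minimizes f_i(x) = 0.5 * ||x - b_i||^2, so the global optimum is mean(b_i).
# All parameters are hypothetical; the paper's accelerated methods with
# complexity separation are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 10, 5
B = rng.normal(size=(num_clients, dim))   # heterogeneous client optima b_i

x = np.zeros(dim)                         # global model
lr, local_steps, rounds = 0.1, 5, 50

for _ in range(rounds):
    updates = []
    for b in B:                           # each client runs local GD from x
        x_local = x.copy()
        for _ in range(local_steps):
            grad = x_local - b            # gradient of 0.5 * ||x - b||^2
            x_local -= lr * grad
        updates.append(x_local)
    x = np.mean(updates, axis=0)          # server averages the local models

print("distance to global optimum:", np.linalg.norm(x - B.mean(axis=0)))
```

Running multiple local steps before averaging is exactly what makes such methods communication-efficient, and the heterogeneity of the b_i is what the study's data-similarity assumptions are meant to control.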
