DictPFL: Efficient and Private Federated Learning on Encrypted Gradients

arXiv — cs.LG · Monday, October 27, 2025 at 4:00:00 AM
The recent introduction of DictPFL marks a notable step forward in federated learning by addressing the privacy risks of gradient sharing: the updates that clients exchange can leak information about their training data. The approach uses homomorphic encryption to aggregate gradients securely while keeping the computational and communication overhead of encryption low. This matters because it lets institutions collaborate on model training without exposing sensitive information, strengthening the practical case for privacy-preserving machine learning.
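To see why encrypted aggregation protects individual updates, consider a minimal sketch using additive homomorphic encryption. The example below relies on the Paillier cryptosystem via the `phe` (python-paillier) package purely as an illustration; it does not reproduce DictPFL's actual encryption scheme or its overhead-reduction techniques, and the client gradients, key size, and single-keypair setup are illustrative assumptions.

```python
# Illustrative sketch only: additive homomorphic aggregation of client gradients
# with the Paillier cryptosystem ('phe' package). This is NOT DictPFL's scheme;
# it only shows why a server can sum encrypted updates without seeing any of them.
from phe import paillier

# Key setup (in a real deployment, clients would hold the private key or use
# threshold decryption; a single keypair keeps the sketch short).
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Hypothetical local gradients from three clients (one scalar per client for brevity).
client_gradients = [0.12, -0.05, 0.30]

# Each client encrypts its gradient before uploading it.
encrypted_updates = [public_key.encrypt(g) for g in client_gradients]

# The server adds ciphertexts; Paillier addition of ciphertexts corresponds to
# addition of the underlying plaintexts, so no individual gradient is revealed.
encrypted_sum = encrypted_updates[0]
for enc in encrypted_updates[1:]:
    encrypted_sum = encrypted_sum + enc

# Only the key holder can decrypt the aggregate, e.g. to average it.
aggregate = private_key.decrypt(encrypted_sum) / len(client_gradients)
print(f"averaged gradient: {aggregate:.4f}")  # ~0.1233
```

In practice the gradients are high-dimensional vectors rather than scalars, which is exactly where encryption overhead becomes the bottleneck that DictPFL aims to reduce.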
— via World Pulse Now AI Editorial System


Continue Reading
Privacy-Preserving Federated Vision Transformer Learning Leveraging Lightweight Homomorphic Encryption in Medical AI
Positive · Artificial Intelligence
A new framework for privacy-preserving federated learning has been introduced, combining Vision Transformers with lightweight homomorphic encryption to enhance histopathology classification across multiple healthcare institutions. This approach addresses the challenges posed by privacy regulations like HIPAA, which restrict direct patient data sharing, while still enabling collaborative machine learning.
CycleSL: Server-Client Cyclical Update Driven Scalable Split Learning
Positive · Artificial Intelligence
CycleSL has been introduced as a new framework for scalable split learning, addressing the limitations of existing methods by eliminating the need for aggregation and enhancing performance. This approach allows for improved collaboration in distributed model training without the exchange of raw data, thereby maintaining data privacy.
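For context, split learning in its basic form partitions a model at a cut layer: the client runs the early layers on its raw data and sends only intermediate activations to the server, which completes the forward and backward pass. The sketch below illustrates that generic pattern with PyTorch; it is not CycleSL's cyclical server-client update procedure, and the layer sizes and variable names are illustrative assumptions.

```python
# Generic split-learning step (illustrative; not CycleSL's cyclical update scheme).
# The client keeps raw data and the early layers; the server only ever receives
# intermediate activations at the cut layer.
import torch
import torch.nn as nn

torch.manual_seed(0)

client_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU())                    # runs on the client
server_model = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))  # runs on the server

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.1)

x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))  # raw data never leaves the client

# Client-side forward pass up to the cut layer; only activations are transmitted.
activations = client_model(x)
sent = activations.detach().requires_grad_(True)       # what the server receives

# Server-side forward and backward pass.
logits = server_model(sent)
loss = nn.functional.cross_entropy(logits, y)
server_opt.zero_grad()
loss.backward()
server_opt.step()

# The server returns the gradient at the cut layer; the client finishes backprop locally.
client_opt.zero_grad()
activations.backward(sent.grad)
client_opt.step()

print(f"loss: {loss.item():.4f}")
```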
FedPoisonTTP: A Threat Model and Poisoning Attack for Federated Test-Time Personalization
Negative · Artificial Intelligence
A new threat model and attack framework called FedPoisonTTP has been introduced to expose security vulnerabilities in federated learning during test-time personalization. It shows how compromised participants can exploit local adaptation to submit poisoned inputs that degrade both global and individual model performance. The framework synthesizes adversarial updates to produce high-entropy or class-confident poisons, posing significant risks to the integrity of federated learning systems.