FedPoisonTTP: A Threat Model and Poisoning Attack for Federated Test-Time Personalization

arXiv — cs.CV · Tuesday, November 25, 2025 at 5:00:00 AM
  • A new framework called FedPoisonTTP has been introduced to expose security vulnerabilities in federated learning during test-time personalization. It shows how compromised participants can exploit local test-time adaptation to inject poisoned inputs that degrade both global and per-client model performance: the attack synthesizes adversarial updates to craft high-entropy or class-confident poisons (a rough sketch of the entropy-based variant follows this list), posing significant risks to the integrity of federated learning systems.
  • FedPoisonTTP matters because it underscores the need for robust defenses against adversarial attacks in federated learning. As models increasingly adapt to local domain shifts at test time, the opportunity for malicious interference grows, threatening the reliability of personalized AI applications. The work calls for heightened awareness and stronger security measures in federated learning deployments.
  • The introduction of FedPoisonTTP reflects a broader concern about the security of federated learning, particularly alongside recent advances aimed at strengthening data privacy and model robustness. As federated learning evolves, balancing personalization against security grows more complex, and a range of defenses against model poisoning and adversarial attacks continues to emerge. Comprehensive strategies are needed to guard against these vulnerabilities while preserving the benefits of collaborative learning.
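The summary above stays high-level, so as a rough illustration of the "high-entropy poison" idea mentioned in the first bullet, the following PyTorch sketch perturbs a clean batch by gradient ascent on predictive entropy. The function name, step count, step size, and L-infinity budget are illustrative assumptions; this is not FedPoisonTTP's published attack procedure.

```python
import torch
import torch.nn.functional as F

def craft_high_entropy_poison(model, x, steps=10, lr=0.01, eps=8 / 255):
    """Hypothetical helper: perturb a clean batch x so the model's
    predictions become maximally uncertain, within an L-inf ball of
    radius eps around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        probs = F.softmax(model(x_adv), dim=1)
        # Shannon entropy of the predictive distribution, averaged over the batch.
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
        grad, = torch.autograd.grad(entropy, x_adv)
        # Ascend on entropy, then project back into the perturbation budget.
        x_adv = (x_adv + lr * grad.sign()).detach()
        x_adv = x_adv.clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv
```

An analogous "class-confident" poison could instead descend on entropy, or ascend on a chosen class's log-probability, under the same budget.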
— via World Pulse Now AI Editorial System

Continue Reading
Merging without Forgetting: Continual Fusion of Task-Specific Models via Optimal Transport
Positive · Artificial Intelligence
A novel model merging framework called OTMF (Optimal Transport-based Masked Fusion) has been introduced to address the challenges of merging task-specific models without losing their unique identities. This approach leverages optimal transport theory to align the semantic geometry of different models, thereby preserving task-specific knowledge while enhancing multi-task system efficiency.
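As a loose illustration of the optimal-transport alignment idea mentioned above, the sketch below matches the neurons (weight rows) of one layer to another with an exact OT plan from the POT library before averaging them. The function name, uniform neuron weights, and plain averaging are assumptions for illustration; OTMF's masked-fusion procedure itself is specified in the paper.

```python
import numpy as np
import ot  # Python Optimal Transport (pip install pot)

def ot_align_and_fuse(W_a, W_b):
    """Hypothetical helper: align rows of W_b to rows of W_a via an OT
    plan on pairwise squared-Euclidean costs, then average the layers."""
    n = W_a.shape[0]
    mu = np.full(n, 1.0 / n)        # uniform mass on each neuron
    cost = ot.dist(W_a, W_b)        # squared-Euclidean cost matrix
    plan = ot.emd(mu, mu, cost)     # exact OT plan, shape (n, n)
    W_b_aligned = n * plan @ W_b    # barycentric projection onto W_a's ordering
    return 0.5 * (W_a + W_b_aligned)

# Toy usage: fuse two "layers" that hold the same neurons in shuffled order.
rng = np.random.default_rng(0)
W_a = rng.normal(size=(8, 16))
W_b = W_a[rng.permutation(8)]
fused = ot_align_and_fuse(W_a, W_b)
print(np.allclose(fused, W_a, atol=1e-6))  # True: alignment undoes the shuffle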
CycleSL: Server-Client Cyclical Update Driven Scalable Split Learning
Positive · Artificial Intelligence
CycleSL has been introduced as a new framework for scalable split learning, addressing the limitations of existing methods by eliminating the need for aggregation and enhancing performance. This approach allows for improved collaboration in distributed model training without the exchange of raw data, thereby maintaining data privacy.
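As a generic illustration of the split-learning pattern behind this line of work, the sketch below cycles through clients so that only cut-layer activations and their gradients cross the client-server boundary; raw data stays local. The architecture, optimizers, and round structure are assumptions for illustration and do not reproduce CycleSL's specific cyclical update scheme.

```python
import torch
import torch.nn as nn

server_net = nn.Sequential(nn.Linear(32, 10))           # server-side tail
clients = [nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # client-side heads
           for _ in range(3)]
opt_server = torch.optim.SGD(server_net.parameters(), lr=0.1)
opts_client = [torch.optim.SGD(c.parameters(), lr=0.1) for c in clients]
loss_fn = nn.CrossEntropyLoss()

for rnd in range(2):                         # a couple of training rounds
    for cid, client in enumerate(clients):   # visit clients cyclically
        x = torch.randn(16, 64)              # stand-in for a private local batch
        y = torch.randint(0, 10, (16,))
        smashed = client(x)                  # client forward to the cut layer
        # The server receives only the activations, never the raw data.
        act = smashed.detach().requires_grad_(True)
        loss = loss_fn(server_net(act), y)
        opt_server.zero_grad()
        loss.backward()                      # server backward pass
        opt_server.step()
        # The cut-layer gradient is returned to the client, which
        # finishes backpropagation through its own head locally.
        opts_client[cid].zero_grad()
        smashed.backward(act.grad)
        opts_client[cid].step()
```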