FedALT: Federated Fine-Tuning through Adaptive Local Training with Rest-of-World LoRA

arXiv — cs.CL · Monday, November 17, 2025 at 5:00:00 AM
  • FedALT is introduced as a novel personalized federated LoRA fine-tuning approach built on adaptive local training with a Rest-of-World LoRA component.
  • The development of FedALT is significant because it represents a shift from traditional aggregation methods, potentially leading to better performance on natural language processing tasks. It could also enhance privacy, since fine-tuning data stays on each client; a minimal sketch of the core idea appears below.
— via World Pulse Now AI Editorial System
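
At a high level, the title and summary suggest that each client pairs its own locally trained LoRA with a shared "Rest-of-World" LoRA carrying what the other clients contributed, and that an adaptive component decides how to combine the two rather than averaging everything into a single adapter. The following is a minimal PyTorch sketch of what such a layer could look like; the names, the scalar sigmoid gate, and the frozen/trainable split are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MixedLoRALinear(nn.Module):
    """Frozen base layer plus two LoRA branches: a trainable local LoRA and a
    frozen Rest-of-World (RoW) LoRA received from the server, combined by a
    learned gate. Names and the gating form are illustrative assumptions."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        # Local LoRA: updated only on this client's data.
        self.local_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.local_B = nn.Parameter(torch.zeros(out_f, rank))
        # Rest-of-World LoRA: holds aggregated knowledge from the other clients.
        self.row_A = nn.Parameter(torch.zeros(rank, in_f), requires_grad=False)
        self.row_B = nn.Parameter(torch.zeros(out_f, rank), requires_grad=False)
        # Adaptive mixer: decides how much shared vs. local knowledge to use.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        local = (x @ self.local_A.T) @ self.local_B.T
        shared = (x @ self.row_A.T) @ self.row_B.T
        alpha = torch.sigmoid(self.gate)  # in (0, 1)
        return self.base(x) + alpha * shared + (1.0 - alpha) * local
```

Presumably, each communication round would refresh only the `row_A`/`row_B` branch from the server-side aggregate while the local branch keeps training on client data; that separation is where the shift away from FedAvg-style merging comes from.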


Recommended Readings
Playmate2: Training-Free Multi-Character Audio-Driven Animation via Diffusion Transformer with Reward Feedback
Positive · Artificial Intelligence
Recent advancements in diffusion models have led to significant improvements in audio-driven human video generation, outperforming traditional techniques in quality and controllability. However, challenges remain in achieving lip-sync accuracy, maintaining temporal coherence in long videos, and creating multi-character animations. The proposed framework utilizes a diffusion transformer (DiT) to generate realistic talking videos of any length without the need for training. It incorporates a LoRA-based strategy and a position shift inference method, enhancing lip synchronization and natural body…
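
The summary does not spell out how "position shift inference" enables videos of any length; one plausible reading is window-by-window generation in which each window's positional indices are shifted back to the range the pretrained model saw, with a few overlapping frames carried over as context. The sketch below is purely illustrative of that generic idea (the `model(...)` interface is hypothetical), not Playmate2's actual procedure.

```python
import torch

def generate_long_video(model, audio_feats, window=64, overlap=8):
    """Windowed generation for arbitrary-length videos: every window re-indexes
    positions from zero (the "shift") and conditions on the last `overlap`
    frames of the previous window. Hypothetical interface, illustrative only."""
    frames, prev, start = [], None, 0
    total = audio_feats.shape[0]
    while start < total:
        end = min(start + window, total)
        pos = torch.arange(end - start)  # positions restart at 0 in every window
        chunk = model(audio_feats[start:end], positions=pos, context=prev)
        frames.append(chunk if prev is None else chunk[overlap:])
        prev = chunk[-overlap:]
        start = end - overlap if end < total else end
    return torch.cat(frames, dim=0)
```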
MoETTA: Test-Time Adaptation Under Mixed Distribution Shifts with MoE-LayerNorm
Positive · Artificial Intelligence
MoETTA is a novel test-time adaptation (TTA) framework designed to address performance drops during mixed distribution shifts in machine learning. Traditional TTA methods struggle with diverse domain factors that can conflict, leading to suboptimal results. MoETTA leverages an entropy-based approach and the Mixture-of-Experts (MoE) architecture to allow for varied gradient directions across domains, enhancing adaptability during inference. This framework aims to improve performance in real-world applications where data distribution is often heterogeneous.
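
The mechanism described here (entropy-driven adaptation with Mixture-of-Experts normalization layers) can be sketched concretely: keep several LayerNorm "experts" per layer, route among them per input, and update the adaptable parameters at test time by minimizing prediction entropy. The code below is a hedged PyTorch illustration of that combination; the routing signal, expert count, and which parameters get adapted are assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayerNorm(nn.Module):
    """A bank of LayerNorm experts mixed by a learned router, so different
    test domains can pull the normalization in different directions."""

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(nn.LayerNorm(dim) for _ in range(num_experts))
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x):                                             # x: (B, T, D)
        weights = F.softmax(self.router(x.mean(dim=1)), dim=-1)       # (B, E)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)      # (B, T, D, E)
        return (outs * weights[:, None, None, :]).sum(dim=-1)

def entropy_adapt_step(model, x, optimizer):
    """One test-time adaptation step: minimize prediction entropy so the
    adaptable parameters drift toward the current test distribution."""
    probs = F.softmax(model(x), dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```

In practice only the expert and router parameters would be handed to the optimizer, mirroring how entropy-minimization TTA methods adapt normalization layers rather than the full network.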
FAPE-IR: Frequency-Aware Planning and Execution Framework for All-in-One Image Restoration
Positive · Artificial Intelligence
FAPE-IR introduces a Frequency-Aware Planning and Execution framework for All-in-One Image Restoration (AIO-IR), designed to address multiple image degradations in complex conditions. Unlike existing methods that depend on task-specific designs, FAPE-IR utilizes a frozen Multimodal Large Language Model (MLLM) to analyze degraded images and create frequency-aware restoration plans. These plans guide a LoRA-based Mixture-of-Experts (LoRA-MoE) module, which dynamically selects experts based on the frequency features of the input image, enhancing restoration quality through adversarial training an…
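
The routing idea, as summarized, is that a coarse frequency signature of the degraded image selects among LoRA experts. A small sketch of frequency-aware gating over LoRA experts follows; the FFT-based descriptor, expert count, and tensor shapes are illustrative assumptions, and the planning done by the frozen MLLM is omitted entirely.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyRoutedLoRA(nn.Module):
    """Wraps a frozen linear layer with several LoRA experts and routes among
    them using a coarse frequency profile of the input image. A sketch of the
    general idea only, not the paper's exact design."""

    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 4, bands: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(num_experts, rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, base.out_features, rank))
        self.router = nn.Linear(bands, num_experts)
        self.bands = bands

    def frequency_profile(self, image):                               # image: (B, C, H, W)
        # Coarsely binned magnitude spectrum as a crude frequency descriptor.
        spec = torch.fft.fft2(image.mean(dim=1)).abs()                # (B, H, W)
        pooled = F.adaptive_avg_pool1d(spec.flatten(1).unsqueeze(1), self.bands)
        return pooled.squeeze(1)                                      # (B, bands)

    def forward(self, tokens, image):                                 # tokens: (B, T, in)
        gate = F.softmax(self.router(self.frequency_profile(image)), dim=-1)  # (B, E)
        low = torch.einsum('bti,eri->bter', tokens, self.A)           # (B, T, E, r)
        delta = torch.einsum('bter,eor->bteo', low, self.B)           # (B, T, E, out)
        mixed = torch.einsum('bteo,be->bto', delta, gate)             # (B, T, out)
        return self.base(tokens) + mixed
```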
YOLO Meets Mixture-of-Experts: Adaptive Expert Routing for Robust Object Detection
Positive · Artificial Intelligence
The paper introduces a new Mixture-of-Experts framework for object detection, which utilizes adaptive routing among multiple YOLOv9-T experts. This approach allows for dynamic feature specialization, resulting in improved performance metrics, specifically higher mean Average Precision (mAP) and Average Recall (AR) compared to using a single YOLOv9-T model. The findings suggest significant advancements in the field of object detection.
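
Adaptive routing among detector experts can be illustrated with a lightweight gating network that weights each expert's features per image. The sketch below uses generic backbones as stand-ins for the YOLOv9-T experts and a simple weighted fusion, which is an assumption rather than the paper's specific routing or fusion rule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetectorMoE(nn.Module):
    """Adaptive routing among several detector experts using a small gating
    network over the input image. Generic sketch of the Mixture-of-Experts
    idea; experts stand in for YOLOv9-T backbones."""

    def __init__(self, experts):
        super().__init__()
        self.experts = nn.ModuleList(experts)  # each expert: image -> feature map (B, C, H, W)
        self.gate = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, len(experts)),
        )

    def forward(self, images):
        weights = F.softmax(self.gate(images), dim=-1)                  # (B, E)
        feats = torch.stack([e(images) for e in self.experts], dim=1)   # (B, E, C, H, W)
        fused = (feats * weights[:, :, None, None, None]).sum(dim=1)    # weighted fusion
        return fused  # a detection head would then predict boxes from the fused features
```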