FAPE-IR: Frequency-Aware Planning and Execution Framework for All-in-One Image Restoration

arXiv — cs.CV · Wednesday, November 19, 2025 at 5:00:00 AM
  • FAPE-IR
  • This framework represents a significant advancement in image restoration, combining semantic planning with frequency-aware execution.
  • The introduction of FAPE-IR…
— via World Pulse Now AI Editorial System


Recommended Readings
Playmate2: Training-Free Multi-Character Audio-Driven Animation via Diffusion Transformer with Reward Feedback
Positive · Artificial Intelligence
Recent advancements in diffusion models have led to significant improvements in audio-driven human video generation, outperforming traditional techniques in quality and controllability. However, challenges remain in achieving lip-sync accuracy, maintaining temporal coherence in long videos, and creating multi-character animations. The proposed framework utilizes a diffusion transformer (DiT) to generate realistic talking videos of any length without the need for training. It incorporates a LoRA-based strategy and a position shift inference method, enhancing lip synchronization and natural body…
MoE-SpeQ: Speculative Quantized Decoding with Proactive Expert Prefetching and Offloading for Mixture-of-Experts
Positive · Artificial Intelligence
The paper introduces MoE-SpeQ, a novel inference system designed to address the memory limitations of Mixture-of-Experts (MoE) models during inference. Traditional methods often lead to I/O bottlenecks due to data-dependent expert selection. MoE-SpeQ mitigates this by utilizing a small on-device draft model to predict future expert requirements, allowing for proactive prefetching from host memory. This approach enhances performance by reducing the critical path of execution and improving overall efficiency in MoE applications.
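The prefetching idea described above can be sketched as a cache warm-up loop: a draft predictor guesses which experts the next token will route to, and those experts are pulled from host memory into a small on-device cache before they are needed. This is only an illustrative sketch; the class and function names (`ExpertCache`, `prefetch`) are hypothetical stand-ins, not the paper's API, and the "draft model" is stubbed out as a precomputed routing trace.

```python
from collections import OrderedDict

class ExpertCache:
    """LRU cache standing in for limited on-device expert memory."""
    def __init__(self, capacity, host_store):
        self.capacity = capacity
        self.host = host_store          # "host memory": expert_id -> weights
        self.device = OrderedDict()     # "device memory", LRU-ordered
        self.hits = 0
        self.misses = 0

    def prefetch(self, expert_ids):
        """Proactively load predicted experts, off the critical path."""
        for eid in expert_ids:
            self._load(eid)

    def get(self, eid):
        """Fetch an expert at execution time; a miss is a blocking I/O stall."""
        if eid in self.device:
            self.hits += 1
            self.device.move_to_end(eid)
        else:
            self.misses += 1            # synchronous load on the critical path
            self._load(eid)
        return self.device[eid]

    def _load(self, eid):
        if eid not in self.device:
            if len(self.device) >= self.capacity:
                self.device.popitem(last=False)   # evict least-recently-used
            self.device[eid] = self.host[eid]

# Toy run: the routing trace plays the role of the small draft model's
# predictions; in the paper these come from an on-device draft network.
host = {i: f"weights_{i}" for i in range(8)}
cache = ExpertCache(capacity=4, host_store=host)
routing_trace = [[0, 1], [2, 3], [0, 1]]
for step, experts in enumerate(routing_trace):
    if step + 1 < len(routing_trace):
        cache.prefetch(routing_trace[step + 1])   # overlaps current compute
    for eid in experts:
        _ = cache.get(eid)
print(cache.hits, cache.misses)
```

In this toy trace the experts for step 1 are already resident when requested, because they were prefetched during step 0, which is the mechanism the paper uses to shorten the critical path.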
YOLO Meets Mixture-of-Experts: Adaptive Expert Routing for Robust Object Detection
Positive · Artificial Intelligence
The paper introduces a new Mixture-of-Experts framework for object detection, which utilizes adaptive routing among multiple YOLOv9-T experts. This approach allows for dynamic feature specialization, resulting in improved performance metrics, specifically higher mean Average Precision (mAP) and Average Recall (AR) compared to using a single YOLOv9-T model. The findings suggest significant advancements in the field of object detection.
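Adaptive routing of this kind can be sketched as a gating function that scores each expert for a given input and dispatches to the top-scoring ones. The sketch below is a generic top-k Mixture-of-Experts router, not the paper's implementation: the scalar "experts" and hand-set gate weights are illustrative stand-ins for YOLOv9-T detection backbones and a learned gating network.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def route(features, gate_weights, experts, top_k=1):
    """Score experts via the gate, keep the top-k, and mix their outputs."""
    logits = [sum(w * f for w, f in zip(ws, features)) for ws in gate_weights]
    probs = softmax(logits)
    ranked = sorted(range(len(experts)), key=lambda i: -probs[i])[:top_k]
    total = sum(probs[i] for i in ranked)
    # Weighted combination of the selected experts' outputs (here: scalars).
    out = sum(probs[i] / total * experts[i](features) for i in ranked)
    return out, ranked

# Three toy experts, each biased toward one feature channel, mimicking
# the dynamic feature specialization the summary describes.
experts = [
    lambda f: f[0] * 2.0,
    lambda f: f[1] * 2.0,
    lambda f: f[2] * 2.0,
]
gate_weights = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

out, chosen = route([0.1, 0.9, 0.2], gate_weights, experts, top_k=1)
print(chosen)   # the gate routes this input to the channel-1 specialist
```

With `top_k=1` this behaves like hard expert selection; raising `top_k` blends several specialists, which is the usual knob for trading compute against robustness in MoE designs.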
Watch Out for the Lifespan: Evaluating Backdoor Attacks Against Federated Model Adaptation
Neutral · Artificial Intelligence
The article discusses the evaluation of backdoor attacks against federated model adaptation, particularly focusing on the impact of Parameter-Efficient Fine-Tuning techniques like Low-Rank Adaptation (LoRA). It highlights the security threats posed by backdoor attacks during local training phases and presents findings on backdoor lifespan, indicating that lower LoRA ranks can lead to longer persistence of backdoors. This research emphasizes the need for improved evaluation methods to address these vulnerabilities in Federated Learning.
FedALT: Federated Fine-Tuning through Adaptive Local Training with Rest-of-World LoRA
Positive · Artificial Intelligence
The article presents FedALT, a new algorithm for federated fine-tuning of large language models (LLMs) that addresses the challenges of cross-client interference and data heterogeneity. Traditional methods, primarily based on FedAvg, often lead to suboptimal personalization due to model aggregation issues. FedALT allows each client to continue training its individual LoRA while integrating knowledge from a separate Rest-of-World (RoW) LoRA component. This approach includes an adaptive mixer to balance local adaptation with global information effectively.
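The combination rule the summary describes can be sketched in one line: each client's output mixes its personal LoRA with the shared Rest-of-World LoRA through an adaptive mixing coefficient. The scalar "adapters" below are stand-ins for low-rank weight updates on an LLM, and `client_forward` is a hypothetical name, not FedALT's actual API.

```python
def client_forward(x, base, local_lora, row_lora, mixer):
    """mixer in [0, 1]: 1.0 = fully personalized, 0.0 = fully global."""
    adapted = mixer * local_lora(x) + (1.0 - mixer) * row_lora(x)
    return base(x) + adapted

base = lambda x: 1.0 * x          # frozen pretrained weight
local_lora = lambda x: 0.5 * x    # this client's personal adapter
row_lora = lambda x: -0.2 * x     # aggregate of the other clients' adapters

# A client whose data diverges from the federation leans on its own LoRA
# by learning a mixer close to 1.0.
print(client_forward(2.0, base, local_lora, row_lora, mixer=0.8))
```

Keeping the local LoRA out of the aggregation step is what avoids the cross-client interference the summary attributes to FedAvg-style merging; only the RoW component carries global information.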