Erase to Retain: Low Rank Adaptation Guided Selective Unlearning in Medical Segmentation Networks

arXiv — cs.CV · Friday, November 21, 2025 at 5:00:00 AM
  • The 'Erase to Retain' framework offers a novel approach to selectively unlearning knowledge in medical segmentation networks, addressing the growing need for privacy compliance and ethical data handling.
  • This development is significant because it lets medical institutions remove sensitive information from trained models, upholding patient privacy while still benefiting from advanced imaging technologies.
  • The integration of Low-Rank Adaptation (LoRA) guides the selective unlearning process within the segmentation network.
— via World Pulse Now AI Editorial System
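The LoRA-guided unlearning idea above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's method: it assumes the framework freezes the base segmentation weights and trains only a low-rank delta (the standard LoRA parameterization) on the data to be forgotten, so the "erase" update cannot disturb the full-rank retained knowledge. The layer size and rank are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one layer (hypothetical 8x8 layer for illustration).
W_base = rng.standard_normal((8, 8))

# Low-rank "erase" adapter: only A and B would be trained on the forget set,
# so the update W_base + B @ A is rank-limited (rank r = 2 here).
r = 2
A = rng.standard_normal((r, 8)) * 0.01   # down-projection
B = np.zeros((8, r))                     # up-projection, zero-init as in LoRA

def adapted_forward(x, scale=1.0):
    """Forward pass with the low-rank unlearning delta added to the frozen base."""
    delta = B @ A                         # rank-r correction
    return (W_base + scale * delta) @ x

x = rng.standard_normal(8)
# With B zero-initialized the adapter is a no-op, so retained behavior is
# untouched before any unlearning updates are applied.
assert np.allclose(adapted_forward(x), W_base @ x)
```

Because the delta has rank at most `r`, the capacity of the erase update is bounded by construction, which is the structural property the title's "Low Rank Adaptation Guided" phrasing points at.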


Continue Reading
Mixture of Ranks with Degradation-Aware Routing for One-Step Real-World Image Super-Resolution
Positive · Artificial Intelligence
The study presents a novel Mixture-of-Ranks (MoR) architecture for real-world image super-resolution (Real-ISR), integrating sparse Mixture-of-Experts (MoE) into existing frameworks. This approach aims to enhance the adaptability of models in capturing the diverse characteristics of degraded images while facilitating knowledge sharing among inputs. The proposed method utilizes a fine-grained expert partitioning strategy, treating each rank in Low-Rank Adaptation (LoRA) as an independent expert.
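The "each LoRA rank is an independent expert" idea can be made concrete with a toy sparse-MoE routing step. This is a hedged sketch under assumed dimensions, not the paper's architecture: each rank-1 factor pair (a_i, b_i) acts as one expert, a hypothetical linear router scores the experts per input, and only the top-k rank-1 updates are gated and summed.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_ranks, top_k = 6, 4, 2

# One rank-1 LoRA factor pair per "expert".
A = rng.standard_normal((n_ranks, d))       # down-projections, one row per rank
B = rng.standard_normal((d, n_ranks))       # up-projections, one column per rank
W_router = rng.standard_normal((n_ranks, d))  # hypothetical router weights

def mor_delta(x):
    """Route the input to its top-k rank-1 experts and sum their gated updates."""
    logits = W_router @ x
    keep = np.argsort(logits)[-top_k:]                  # sparse top-k selection
    gates = np.exp(logits[keep]) / np.exp(logits[keep]).sum()  # softmax over kept
    # Weighted sum of the selected rank-1 updates: g_i * b_i * (a_i . x).
    return sum(g * B[:, i] * (A[i] @ x) for g, i in zip(gates, keep))

y = mor_delta(rng.standard_normal(d))
```

The sparsity (only `top_k` of `n_ranks` experts fire per input) is what lets the model adapt to heterogeneous degradations without paying for every rank on every image.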
ILoRA: Federated Learning with Low-Rank Adaptation for Heterogeneous Client Aggregation
Positive · Artificial Intelligence
ILoRA, or Federated Learning with Low-Rank Adaptation, addresses three significant challenges in client heterogeneity: initialization instability, rank incompatibility, and client drift under non-IID data. The proposed framework integrates a QR-based initialization, a concatenated QR aggregation mechanism, and an AdamW optimizer with rank-aware control variates. These innovations aim to enhance the stability and performance of federated learning models across diverse client environments.
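One way the QR-based pieces could look in code, as a sketch rather than the paper's exact recipe: each client initializes its LoRA down-projection with orthonormal rows via a QR decomposition (addressing initialization instability), and the server aggregates by concatenating client factors and re-orthonormalizing with another QR. Dimensions and the truncation-to-rank-r step are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 8, 3

def qr_init(d, r, rng):
    """Client-side LoRA init with orthonormal down-projection rows via QR."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, r)))   # Q: (d, r), orthonormal cols
    A = Q.T                                            # (r, d) down-projection
    B = np.zeros((d, r))                               # zero-init keeps delta at 0
    return A, B

A, B = qr_init(d, r, rng)
# Orthonormal rows: A @ A.T is the identity, giving a well-conditioned start.
assert np.allclose(A @ A.T, np.eye(r))

# Server-side sketch of "concatenated QR aggregation": stack client factors,
# re-orthonormalize the combined subspace, and truncate back to rank r.
A_clients = [qr_init(d, r, rng)[0] for _ in range(3)]
Q, _ = np.linalg.qr(np.concatenate(A_clients).T)       # (d, 3r) -> orthonormal basis
A_global = Q[:, :r].T
```

Re-orthonormalizing after concatenation sidesteps the rank-incompatibility problem the summary mentions: clients with different effective ranks still aggregate into one well-conditioned global factor.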
LoRA on the Go: Instance-level Dynamic LoRA Selection and Merging
Positive · Artificial Intelligence
LoRA on the Go (LoGo) introduces a training-free framework for dynamic selection and merging of Low-Rank Adaptation (LoRA) adapters at the instance level. This approach addresses the limitations of conventional LoRA adapters, which are typically trained for single tasks. By leveraging signals from a single forward pass, LoGo identifies the most relevant adapters for diverse tasks, enhancing performance across multiple NLP benchmarks without the need for additional labeled data.
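The training-free selection step can be sketched as follows. The scoring rule here (norm of each adapter's response to the input) is a hypothetical stand-in for LoGo's actual single-forward-pass signal; the adapter pool, rank, and top-k merge by score-weighted averaging are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r, n_adapters, top_k = 6, 2, 4, 2

# Pool of task-specific LoRA adapters (hypothetical pre-trained deltas).
adapters = [(rng.standard_normal((r, d)), rng.standard_normal((d, r)))
            for _ in range(n_adapters)]

def logo_merge(x):
    """Training-free per-instance merging: score each adapter on this input,
    keep the top-k, and average their low-rank deltas by normalized score."""
    scores = np.array([np.linalg.norm(B @ (A @ x)) for A, B in adapters])
    keep = np.argsort(scores)[-top_k:]
    w = scores[keep] / scores[keep].sum()
    return sum(wi * adapters[i][1] @ adapters[i][0] for wi, i in zip(w, keep))

delta = logo_merge(rng.standard_normal(d))
```

Because selection happens per instance at inference time, no extra labeled data or retraining is needed; the merged delta's rank is bounded by `top_k * r`.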
HAWAII: Hierarchical Visual Knowledge Transfer for Efficient Vision-Language Models
Positive · Artificial Intelligence
HAWAII is a proposed framework aimed at enhancing the efficiency of vision-language models (VLMs) by distilling knowledge from multiple visual experts into a single vision encoder. This approach minimizes computational costs while retaining the strengths of various experts. The framework employs teacher-specific Low-Rank Adaptation (LoRA) adapters to manage knowledge transfer effectively, reducing conflicts and improving performance in visual understanding tasks.
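The teacher-specific-adapter idea can be sketched minimally: a shared (frozen) encoder weight plus one LoRA adapter per teacher, with only the matching adapter active while distilling from that teacher, so gradients from different experts never collide in the same parameters. All dimensions and the single-layer framing are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
d, r, n_teachers = 6, 2, 3

W_student = rng.standard_normal((d, d))   # shared vision-encoder layer (frozen)

# One LoRA adapter per teacher, isolating each teacher's distillation signal.
teacher_adapters = [(rng.standard_normal((r, d)) * 0.1,
                     rng.standard_normal((d, r)) * 0.1)
                    for _ in range(n_teachers)]

def student_forward(x, teacher_id):
    """During distillation from teacher t, only adapter t is active; the
    shared encoder weight itself stays frozen."""
    A, B = teacher_adapters[teacher_id]
    return (W_student + B @ A) @ x

x = rng.standard_normal(d)
outs = [student_forward(x, t) for t in range(n_teachers)]
```

Keeping the adapters separate is what "reducing conflicts" refers to in the summary: each teacher's knowledge lives in its own low-rank branch while the backbone cost is paid once.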