FedSDWC: Federated Synergistic Dual-Representation Weak Causal Learning for OOD

arXiv — cs.LG · Thursday, November 13, 2025 at 5:00:00 AM
FedSDWC has been proposed to address a central challenge in federated learning (FL): distribution shifts across clients that undermine reliability. The method is a causal inference approach that jointly models invariant and variant features, allowing it to capture causal representations and strengthen FL's out-of-distribution generalization. Extensive experiments show that FedSDWC outperforms existing methods such as FedICON by notable margins on benchmark datasets including CIFAR-10 and CIFAR-100. Its theoretical foundation includes a generalization error bound, derived under specific conditions, that relates the bound to client prior distributions. The advance is significant because it improves FL performance while also addressing data privacy and the reliability of distributed learning systems in real-world applications.
— via World Pulse Now AI Editorial System
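
To make the dual-representation idea concrete, here is a minimal sketch assuming a two-branch encoder: one branch for features meant to stay invariant across clients, and one for client-specific (variant) features. The module names and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualRepresentationEncoder(nn.Module):
    """Hypothetical two-branch encoder illustrating invariant/variant features."""
    def __init__(self, in_dim: int = 512, rep_dim: int = 128):
        super().__init__()
        # invariant branch: aims at causal features shared across clients
        self.invariant = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())
        # variant branch: absorbs client-specific, distribution-dependent signal
        self.variant = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())

    def forward(self, x: torch.Tensor):
        return self.invariant(x), self.variant(x)

# Toy usage: the classifier reads only the invariant representation.
encoder = DualRepresentationEncoder()
head = nn.Linear(128, 10)
z_inv, z_var = encoder(torch.randn(8, 512))
logits = head(z_inv)
```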


Recommended Readings
Preserving Cross-Modal Consistency for CLIP-based Class-Incremental Learning
Positive · Artificial Intelligence
The paper titled 'Preserving Cross-Modal Consistency for CLIP-based Class-Incremental Learning' addresses the challenges of class-incremental learning (CIL) in vision-language models like CLIP. It introduces a two-stage framework called DMC, which separates the adaptation of the vision encoder from the optimization of textual soft prompts. This approach aims to mitigate classifier bias and maintain cross-modal alignment, enhancing the model's ability to learn new categories without forgetting previously acquired knowledge.
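
A minimal sketch of the two-stage split described above, assuming a CLIP-like model with separate vision and text towers; the stand-in modules and optimizers are illustrative, not the DMC code.

```python
import torch
import torch.nn as nn

vision_encoder = nn.Linear(224, 512)                # stand-in for a CLIP vision tower
soft_prompts = nn.Parameter(torch.randn(10, 512))   # learnable textual soft prompts

# Stage 1: adapt the vision encoder; the prompts stay fixed.
for p in vision_encoder.parameters():
    p.requires_grad = True
soft_prompts.requires_grad = False
stage1_opt = torch.optim.SGD(vision_encoder.parameters(), lr=1e-3)

# Stage 2: freeze the vision encoder; optimize only the soft prompts,
# which is how the summary says cross-modal alignment is preserved.
for p in vision_encoder.parameters():
    p.requires_grad = False
soft_prompts.requires_grad = True
stage2_opt = torch.optim.SGD([soft_prompts], lr=1e-3)
```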
Divide, Conquer and Unite: Hierarchical Style-Recalibrated Prototype Alignment for Federated Medical Image Segmentation
Neutral · Artificial Intelligence
The article discusses the challenges of federated learning in medical image segmentation, particularly feature heterogeneity arising from different scanners and protocols. It highlights two main limitations of current methods: incomplete contextual representation learning and layerwise style bias accumulation. To address these issues, the authors propose FedBCS, which bridges feature representation gaps through domain-invariant contextual prototype alignment.
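
As a rough illustration of prototype alignment, the sketch below pulls each client's features toward server-aggregated, class-wise prototypes. This is one plausible instantiation of the idea, not the FedBCS method itself.

```python
import torch
import torch.nn.functional as F

def class_prototypes(features: torch.Tensor, labels: torch.Tensor, n_classes: int):
    # mean feature vector per class (zero vector for classes absent locally)
    protos = torch.zeros(n_classes, features.size(1), device=features.device)
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def alignment_loss(local_feats, labels, global_protos):
    # pull each local feature toward the global prototype of its class
    return F.mse_loss(local_feats, global_protos[labels])

feats = torch.randn(16, 64)
labels = torch.randint(0, 4, (16,))
global_protos = torch.randn(4, 64)   # assumed to come from server aggregation
loss = alignment_loss(feats, labels, global_protos)
```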
Enhanced Structured Lasso Pruning with Class-wise Information
Positive · Artificial Intelligence
The paper titled 'Enhanced Structured Lasso Pruning with Class-wise Information' discusses advancements in neural network pruning. Traditional pruning techniques often overlook class-wise information, discarding statistical information in the process. This study introduces two new pruning schemes, sparse graph-structured lasso pruning with Information Bottleneck (sGLP-IB) and sparse tree-guided lasso pruning with Information Bottleneck (sTLP-IB), which aim to preserve that statistical information while reducing model complexity.
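
The building block here is a structured (group) lasso penalty that drives whole filters toward zero; a minimal sketch follows. The graph/tree guidance and Information Bottleneck terms of sGLP-IB and sTLP-IB are not reproduced.

```python
import torch
import torch.nn as nn

def filter_group_lasso(conv: nn.Conv2d) -> torch.Tensor:
    # L2 norm per output filter, summed: pushes entire filters toward zero
    w = conv.weight  # shape: (out_channels, in_channels, kH, kW)
    return w.flatten(1).norm(p=2, dim=1).sum()

conv = nn.Conv2d(3, 16, kernel_size=3)
penalty = 1e-4 * filter_group_lasso(conv)  # add to the task loss before backward
```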
AMUN: Adversarial Machine UNlearning
Positive · Artificial Intelligence
The paper titled 'AMUN: Adversarial Machine UNlearning' presents a novel method for machine unlearning, which removes the influence of specific data from a trained model so that deletion requests can be honored under privacy regulations. Exact unlearning methods require significant computational resources, while approximate methods have not achieved satisfactory accuracy. The proposed Adversarial Machine UNlearning (AMUN) technique fine-tunes the model on adversarial examples, reducing its confidence on forgotten samples while maintaining accuracy on test data.
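
One plausible reading of that mechanism, sketched below: craft adversarial examples of the forget set and fine-tune on them with the labels the model now predicts. The attack choice (FGSM), the label assignment, and the schedule are assumptions, not AMUN's exact procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # single-step gradient-sign attack on the inputs
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

x_forget = torch.randn(8, 3, 32, 32)
y_forget = torch.randint(0, 10, (8,))
x_adv = fgsm(model, x_forget, y_forget)

# fine-tune on adversarial examples with the model's own predicted labels,
# pushing confidence on the original forget samples down
y_adv = model(x_adv).argmax(dim=1)
opt.zero_grad()
F.cross_entropy(model(x_adv), y_adv).backward()
opt.step()
```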
Orthogonal Soft Pruning for Efficient Class Unlearning
Positive · Artificial Intelligence
The article discusses FedOrtho, a federated unlearning framework designed to enhance data unlearning in federated learning environments. It addresses the challenges of balancing forgetting and retention, particularly in non-IID settings. FedOrtho employs orthogonalized deep convolutional kernels and a one-shot soft pruning mechanism, achieving state-of-the-art performance on datasets like CIFAR-10 and TinyImageNet, with over 98% forgetting quality and 97% retention accuracy.
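
A minimal sketch of one ingredient the summary names, an orthogonality regularizer over flattened convolutional kernels; FedOrtho's one-shot soft pruning and federated coordination are not shown, and the names are illustrative.

```python
import torch
import torch.nn as nn

def kernel_orthogonality_penalty(conv: nn.Conv2d) -> torch.Tensor:
    w = conv.weight.flatten(1)                      # (out_channels, rest)
    gram = w @ w.t()                                # pairwise filter similarity
    eye = torch.eye(gram.size(0), device=w.device)
    return ((gram - eye) ** 2).sum()                # push filters toward orthonormal

conv = nn.Conv2d(3, 16, kernel_size=3)
reg = 1e-3 * kernel_orthogonality_penalty(conv)     # add to the local training loss
```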
UHKD: A Unified Framework for Heterogeneous Knowledge Distillation via Frequency-Domain Representations
Positive · Artificial Intelligence
Unified Heterogeneous Knowledge Distillation (UHKD) is a proposed framework that enhances knowledge distillation (KD) by utilizing intermediate features in the frequency domain. This approach addresses the limitations of traditional KD methods, which are primarily designed for homogeneous models and struggle in heterogeneous environments. UHKD aims to improve model compression while maintaining accuracy, making it a significant advancement in the field of artificial intelligence.
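
The core idea, matching intermediate features in the frequency domain, can be sketched as below; comparing magnitude spectra is one way to soften spatial mismatches between heterogeneous models. The shape-matching projection is assumed to happen upstream and is not part of UHKD as described.

```python
import torch
import torch.nn.functional as F

def frequency_distill_loss(student_feat, teacher_feat):
    # compare magnitude spectra of the feature maps via a 2-D FFT
    s_mag = torch.fft.fft2(student_feat).abs()
    t_mag = torch.fft.fft2(teacher_feat).abs()
    return F.mse_loss(s_mag, t_mag)

s = torch.randn(4, 64, 8, 8)   # student intermediate feature map
t = torch.randn(4, 64, 8, 8)   # teacher feature map, projected to the same shape
loss = frequency_distill_loss(s, t)
```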
When to Stop Federated Learning: Zero-Shot Generation of Synthetic Validation Data with Generative AI for Early Stopping
Positive · Artificial Intelligence
Federated Learning (FL) allows collaborative model training across decentralized devices while ensuring data privacy. Traditional FL methods often run for a set number of global rounds, which can lead to unnecessary computations when optimal performance is achieved earlier. To improve efficiency, a new zero-shot synthetic validation framework using generative AI has been introduced to monitor model performance and determine early stopping points, potentially reducing training rounds by up to 74% while maintaining accuracy within 1% of the optimal.
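
The early-stopping decision itself is a standard patience loop over validation accuracy; here is a minimal sketch, assuming the synthetic validation set has already been generated. The `train_one_round`, `evaluate`, and `patience` names are placeholders, not the paper's protocol.

```python
def run_federated_training(global_rounds, train_one_round, evaluate, patience=5):
    model, best_acc, stale = None, 0.0, 0
    for rnd in range(global_rounds):
        model = train_one_round(rnd)   # one round of federated aggregation
        acc = evaluate(model)          # accuracy on the synthetic validation set
        if acc > best_acc:
            best_acc, stale = acc, 0
        else:
            stale += 1
        if stale >= patience:          # no recent improvement: stop early
            break
    return model, best_acc
```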
On the Necessity of Output Distribution Reweighting for Effective Class Unlearning
Positive · Artificial Intelligence
The paper titled 'On the Necessity of Output Distribution Reweighting for Effective Class Unlearning' identifies a critical flaw in class unlearning evaluations, specifically the neglect of class geometry, which can lead to privacy breaches. It introduces a membership-inference attack via nearest neighbors (MIA-NN) to identify unlearned samples. The authors propose a new fine-tuning objective that adjusts the model's output distribution to mitigate privacy risks, demonstrating that existing unlearning methods are susceptible to MIA-NN across various datasets.
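
As a rough illustration of output-distribution reweighting: zero out the forgotten class's probability mass, renormalize over the remaining classes, and fine-tune toward that target. This is one plausible instantiation of the summary's idea, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def reweighted_target(logits: torch.Tensor, forget_class: int) -> torch.Tensor:
    probs = logits.softmax(dim=1)
    probs[:, forget_class] = 0.0                     # remove forgotten-class mass
    return probs / probs.sum(dim=1, keepdim=True)    # redistribute proportionally

logits = torch.randn(8, 10)
target = reweighted_target(logits.detach(), forget_class=3)
loss = F.kl_div(logits.log_softmax(dim=1), target, reduction="batchmean")
```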