Fully Decentralized Certified Unlearning

arXiv — cs.LG · Wednesday, December 10, 2025 at 5:00:00 AM
  • A recent study introduces a method for fully decentralized certified unlearning in machine learning, removing the influence of specific data points from trained models without a central coordinator. The approach, termed RR-DU, employs a random-walk procedure to enhance privacy and mitigate data-poisoning risks, and provides convergence guarantees in convex settings and stationarity guarantees in nonconvex ones (a toy sketch of the random-walk idea follows this summary).
  • This development is significant as it addresses the growing need for privacy-preserving techniques in machine learning, particularly in decentralized environments where data security and user privacy are paramount. The ability to effectively unlearn data influences can enhance trust in AI systems and comply with privacy regulations.
  • The advancement of decentralized unlearning techniques reflects a broader trend in AI towards more robust privacy measures, paralleling discussions on the limitations of existing unlearning methods and the challenges posed by noisy labels and class ambiguity in deep learning. As machine unlearning evolves, it raises important questions about the efficacy of current frameworks and the need for innovative solutions to ensure data integrity and user rights.
— via World Pulse Now AI Editorial System
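
To make the random-walk idea concrete, here is a minimal numpy sketch of a token-passing unlearning loop: the node holding the deleted point drops it, then the model hops between randomly chosen nodes, each taking a local descent step plus Gaussian noise as a stand-in for the certification mechanism. All names and the update rule are illustrative assumptions, not the paper's actual RR-DU algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decentralized setup: each node holds a shard of linear-regression data.
num_nodes, d = 5, 3
shards = []
for _ in range(num_nodes):
    X = rng.normal(size=(20, d))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)
    shards.append((X, y))

def local_grad(w, X, y):
    """Gradient of the squared loss on one node's shard."""
    return X.T @ (X @ w - y) / len(y)

def random_walk_unlearn(w, shards, node0, idx, steps=50, lr=0.05, sigma=1e-3):
    """Pass the model along a random walk; each visited node takes a descent
    step on its shard (node0 having dropped the point to forget), plus
    Gaussian noise standing in for the certification mechanism."""
    X0, y0 = shards[node0]
    mask = np.ones(len(y0), dtype=bool)
    mask[idx] = False
    shards = list(shards)
    shards[node0] = (X0[mask], y0[mask])   # node0 deletes the requested point
    node = node0
    for _ in range(steps):
        X, y = shards[node]
        w = w - lr * local_grad(w, X, y) + sigma * rng.normal(size=d)
        node = rng.integers(num_nodes)     # next hop of the random walk
    return w

w = rng.normal(size=d)
w = random_walk_unlearn(w, shards, node0=0, idx=3)
print("unlearned weights:", w)
```

Note that no step aggregates across all nodes at once; the model only ever sits at one node, which is what removes the need for a central coordinator.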

Continue Reading
DAASH: A Meta-Attack Framework for Synthesizing Effective and Stealthy Adversarial Examples
Positive · Artificial Intelligence
The introduction of DAASH, a meta-attack framework, marks a significant advancement in generating effective and perceptually aligned adversarial examples, addressing the limitations of traditional Lp-norm constrained methods. This framework strategically composes existing attack methods in a multi-stage process, enhancing the perceptual alignment of adversarial examples.
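
As a rough illustration of multi-stage composition (not DAASH's actual components), the sketch below chains simple attack stages on a toy logistic model, projecting back into an L-infinity ball after each stage so the composite perturbation stays bounded; the stage functions and budget are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic "model"; the gradient is taken w.r.t. the input.
w = rng.normal(size=10)
def grad_wrt_x(x, label):
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - label) * w   # cross-entropy gradient w.r.t. x

def fgsm_stage(x, label, eps=0.05):
    """Signed-gradient step (FGSM-style)."""
    return x + eps * np.sign(grad_wrt_x(x, label))

def smoothing_stage(x, label, k=3):
    """Local averaging as a stand-in for a perceptual-alignment stage."""
    pad = np.pad(x, k // 2, mode="edge")
    return np.convolve(pad, np.ones(k) / k, mode="valid")

def compose(stages, x, label, x_orig, budget=0.1):
    """Apply attack stages in sequence, projecting into an L-inf ball
    after each stage so the composite perturbation stays bounded."""
    for stage in stages:
        x = stage(x, label)
        x = np.clip(x, x_orig - budget, x_orig + budget)
    return x

x = rng.normal(size=10)
x_adv = compose([fgsm_stage, smoothing_stage, fgsm_stage], x.copy(), 1, x)
print("perturbation L-inf norm:", np.abs(x_adv - x).max())
```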
Oscillations Make Neural Networks Robust to Quantization
Positive · Artificial Intelligence
Recent research challenges the notion that weight oscillations during Quantization Aware Training (QAT) are merely undesirable effects, proposing instead that they are crucial for enhancing the robustness of neural networks. The study demonstrates that these oscillations, induced by a new regularizer, can help maintain performance across various quantization levels, particularly in models like ResNet-18 and Tiny Vision Transformer evaluated on CIFAR-10 and Tiny ImageNet datasets.
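
The sketch below illustrates one plausible way such oscillations could be induced: quantization-aware training with a straight-through estimator plus a regularizer that pulls latent weights toward quantization decision boundaries. The regularizer form is a guess for illustration, not the paper's actual term.

```python
import torch

torch.manual_seed(0)

def quantize(w, scale=0.1):
    """Uniform quantizer: round to the nearest grid point."""
    return scale * torch.round(w / scale)

# Toy one-layer regression trained with a straight-through estimator.
w = torch.randn(8, requires_grad=True)
X = torch.randn(64, 8)
y = X @ torch.ones(8)
opt = torch.optim.SGD([w], lr=0.05)
scale, lam = 0.1, 0.01

for step in range(200):
    # STE: forward pass uses quantized weights, backward pass is identity.
    w_q = w + (quantize(w, scale) - w).detach()
    loss = ((X @ w_q - y) ** 2).mean()
    # Distance to the nearest quantization decision boundary (at half-steps);
    # penalizing it pulls weights toward boundaries, encouraging oscillation.
    frac = torch.remainder(w / scale, 1.0)
    reg = ((frac - 0.5) ** 2).mean()
    (loss + lam * reg).backward()
    opt.step()
    opt.zero_grad()

print("final quantized weights:", quantize(w.detach(), scale))
```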
Learning effective pruning at initialization from iterative pruning
Positive · Artificial Intelligence
A recent study explores the potential of pruning at initialization (PaI) by drawing inspiration from iterative pruning methods, aiming to enhance performance in deep learning models. The research highlights the significance of identifying surviving subnetworks based on initial features, which could lead to more efficient pruning strategies and reduced training costs, especially as neural networks grow in size.
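
For context, a standard PaI baseline scores weights at initialization from a single batch; the SNIP-style |weight × gradient| saliency below is one such criterion, shown for illustration rather than as the paper's learned method.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# SNIP-style saliency at initialization: |weight * gradient| on one batch.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
X, y = torch.randn(128, 20), torch.randint(0, 2, (128,))

loss = nn.functional.cross_entropy(model(X), y)
loss.backward()

weights = [p for p in model.parameters() if p.dim() > 1]   # skip biases
scores = torch.cat([(p * p.grad).abs().flatten() for p in weights])
k = int(0.2 * scores.numel())                 # keep the top 20% of weights
threshold = scores.topk(k).values.min()

masks = [((p * p.grad).abs() >= threshold).float() for p in weights]
with torch.no_grad():
    for p, m in zip(weights, masks):
        p.mul_(m)                             # zero out pruned weights

print("kept fraction:", sum(m.sum() for m in masks) / scores.numel())
```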
Conditional Morphogenesis: Emergent Generation of Structural Digits via Neural Cellular Automata
Positive · Artificial Intelligence
A novel Conditional Neural Cellular Automata (c-NCA) architecture has been proposed, enabling the generation of distinct topological structures, specifically MNIST digits, from a single seed. This approach emphasizes local interactions and translation equivariance, diverging from traditional generative models that rely on global receptive fields.
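
A minimal sketch of one conditional NCA update step, assuming each cell perceives its 3×3 neighborhood with depthwise convolutions and the class label is injected as a one-hot map; channel sizes, the aliveness mask, and the conditioning scheme are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

C, H, W, n_classes = 16, 28, 28, 10

# Depthwise 3x3 "perception" followed by a 1x1 update MLP over channels.
perceive = nn.Conv2d(C, 3 * C, kernel_size=3, padding=1, groups=C, bias=False)
update = nn.Sequential(nn.Conv2d(3 * C + n_classes, 64, 1), nn.ReLU(),
                       nn.Conv2d(64, C, 1))

def nca_step(state, label):
    cond = F.one_hot(label, n_classes).float()           # (B, n_classes)
    cond = cond[:, :, None, None].expand(-1, -1, H, W)   # broadcast over grid
    dx = update(torch.cat([perceive(state), cond], dim=1))
    alive = (state[:, :1] > 0.1).float()                 # crude aliveness mask
    return state + dx * alive

# Single seed cell in the center, conditioned on digit class 3.
state = torch.zeros(1, C, H, W)
state[:, :, H // 2, W // 2] = 1.0
label = torch.tensor([3])
for _ in range(20):
    state = nca_step(state, label)
print("state stats:", state.mean().item(), state.std().item())
```

Because every operation is a local convolution, the same rule applies at every grid location, which is where the translation equivariance comes from.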
Discovering Influential Factors in Variational Autoencoders
Neutral · Artificial Intelligence
A recent study has focused on the influential factors extracted by variational autoencoders (VAEs), highlighting the challenge of supervising learned representations without manual intervention. The research emphasizes the role of mutual information between inputs and learned factors as a key indicator for identifying influential factors, revealing that some factors may be non-influential and can be disregarded in data reconstruction.
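
One common proxy for how much information a latent factor carries is its per-dimension KL term against the N(0, I) prior, which vanishes for factors the encoder ignores. The sketch below applies that heuristic to simulated encoder outputs; it illustrates the ranking idea, not the study's estimator.

```python
import torch

torch.manual_seed(0)

# Simulated encoder outputs; in practice these come from a trained VAE.
# Factors with near-zero posterior mean spread carry no information.
B, D = 512, 8
mu = torch.randn(B, D) * torch.tensor([2.0, 1.5, 1.0, 0.5, 0.1, 0.01, 0.0, 0.0])
logvar = torch.zeros(B, D)

# KL( N(mu, sigma^2) || N(0, 1) ) per dimension, averaged over the batch.
kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).mean(dim=0)

influential = (kl_per_dim > 0.05).nonzero().flatten()
print("per-factor KL:", kl_per_dim)
print("influential factors:", influential.tolist())
```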
Nonlinear Optimization with GPU-Accelerated Neural Network Constraints
Neutral · Artificial Intelligence
A new reduced-space formulation for optimizing trained neural networks has been proposed, which evaluates the network's outputs and derivatives on a GPU. This method treats the neural network as a 'gray box,' leading to faster solves and fewer iterations compared to traditional full-space formulations. The approach has been demonstrated on two optimization problems, including adversarial generation for a classifier trained on MNIST images.
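
A minimal sketch of the gray-box idea: the outer optimizer only queries the trained network's outputs and derivatives (here via autograd, on GPU when available). The quadratic-penalty formulation and toy problem are illustrative, not the paper's solver.

```python
import torch

torch.manual_seed(0)

device = "cuda" if torch.cuda.is_available() else "cpu"
net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1)).to(device)

def constraint(x):
    """g(x) = net(x) - 0.5 <= 0; the solver treats net as a gray box."""
    return net(x).squeeze(-1) - 0.5

x = torch.zeros(4, device=device, requires_grad=True)
target = torch.ones(4, device=device)
opt = torch.optim.LBFGS([x], max_iter=50)

def closure():
    opt.zero_grad()
    # Objective: stay close to a target point; quadratic penalty on g(x) > 0.
    obj = ((x - target) ** 2).sum() + 100.0 * torch.relu(constraint(x)) ** 2
    obj.backward()
    return obj

opt.step(closure)
print("solution:", x.detach().cpu(), "g(x):", constraint(x).item())
```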
Quantization Blindspots: How Model Compression Breaks Backdoor Defenses
Neutral · Artificial Intelligence
A recent study highlights the vulnerabilities of backdoor defenses in neural networks when subjected to post-training quantization, revealing that INT8 quantization leads to a 0% detection rate for all evaluated defenses while attack success rates remain above 99%. This raises concerns about the effectiveness of existing security measures in machine learning systems.
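
The underlying mechanism can be sketched with a symmetric per-tensor INT8 round trip: every weight shifts slightly, which can move the statistics that detection tools threshold on. The toy layer below shows the round-trip error only; it is not the study's models or defenses.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_int8(w):
    """Symmetric per-tensor INT8 post-training quantization."""
    scale = np.abs(w).max() / 127.0
    return np.clip(np.round(w / scale), -127, 127).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# A backdoor trigger is often carried by a few outlier weights; the
# round trip perturbs every weight, shifting the statistics defenses use.
w = rng.normal(0, 0.05, size=1000)
w[:5] += 0.8                        # stand-in for trigger-related outliers
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("max round-trip error:", np.abs(w - w_hat).max())
print("outlier z-scores before:", (w[:5] - w.mean()) / w.std())
print("outlier z-scores after :", (w_hat[:5] - w_hat.mean()) / w_hat.std())
```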
PrunedCaps: A Case For Primary Capsules Discrimination
Positive · Artificial Intelligence
A recent study has introduced a pruned version of Capsule Networks (CapsNets), demonstrating that it can operate up to 9.90 times faster than traditional architectures by eliminating 95% of Primary Capsules while maintaining accuracy across various datasets, including MNIST and CIFAR-10.
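
As an illustration of primary-capsule discrimination (the paper's actual criterion may differ), the sketch below scores capsules by the between-class variance of their activation lengths and keeps only the top 5%.

```python
import numpy as np

rng = np.random.default_rng(0)

n_caps, n_classes, per_class = 1152, 10, 32
# Simulated capsule activation lengths per sample: (samples, capsules).
acts = rng.random((n_classes * per_class, n_caps)) * 0.1
labels = np.repeat(np.arange(n_classes), per_class)
acts[:, :60] += labels[:, None] * 0.05   # a few capsules are class-informative

# Between-class variance of each capsule's mean activation length.
class_means = np.stack([acts[labels == c].mean(axis=0) for c in range(n_classes)])
scores = class_means.var(axis=0)

keep = np.argsort(scores)[-int(0.05 * n_caps):]   # retain top 5% of capsules
print(f"kept {len(keep)} of {n_caps} capsules")
print("informative capsules among kept:", np.sum(keep < 60))
```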