The Right to be Forgotten in Pruning: Unveil Machine Unlearning on Sparse Models

arXiv — cs.LG · Thursday, December 4, 2025 at 5:00:00 AM
  • A recent study introduces the concept of 'un-pruning' in machine unlearning, observing that deleted data shapes not only a sparse model's weights but also its topology, i.e., which connections survive pruning. The proposed un-pruning algorithm revises that topology so the memory of deleted data is removed from the trained model, addressing the right to be forgotten. It can be integrated with existing unlearning methods and applies to both structured and unstructured sparse models; a minimal sketch of the idea follows the summary.
  • The development is significant because it extends machine unlearning, which is central to data privacy and to compliance with regulations such as the GDPR, to sparse models. By providing a way to mitigate the impact of data deletion on model performance, it supports the ethical use of AI technologies and strengthens user trust in AI systems.
  • Research on sparse models is gaining traction, as evidenced by ongoing work on model transparency and governance. The intersection of machine unlearning and sparse model design reflects a growing recognition that AI systems need robust frameworks for honoring data-deletion requests, fostering a more responsible AI landscape.
— via World Pulse Now AI Editorial System
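
For readers who want a concrete picture, the sketch below illustrates one plausible reading of the un-pruning idea: after an unlearning step updates the dense weights, the sparsity mask is re-derived from the post-unlearning weights, so the network topology no longer reflects the deleted examples. The paper's actual algorithm is not reproduced here; the gradient-ascent unlearning step, the magnitude-based masking, and all function names (magnitude_mask, unlearn_step, unprune_and_reprune) are illustrative assumptions, not the authors' API.

```python
# Minimal sketch of "un-pruning" under the assumptions stated above.
# The demo model stands in for a pretrained (possibly already-pruned) network.
import torch
import torch.nn as nn
import torch.nn.functional as F


def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep the largest-magnitude weights, zero the rest (unstructured pruning)."""
    k = max(1, int(weight.numel() * (1.0 - sparsity)))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).float()


def unlearn_step(model: nn.Module, forget_batch, lr: float = 1e-2) -> None:
    """One gradient-ascent step on the forget set, a common approximate-
    unlearning baseline standing in for whatever unlearning method is used."""
    inputs, targets = forget_batch
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.add_(lr * p.grad)  # ascend: raise the loss on forgotten data
                p.grad = None


def unprune_and_reprune(model: nn.Module, sparsity: float) -> None:
    """'Un-prune' by discarding the old mask, then re-derive the topology from
    the post-unlearning weights so deleted data no longer shapes it."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear):
                module.weight.mul_(magnitude_mask(module.weight, sparsity))


if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    forget_batch = (torch.randn(8, 20), torch.randint(0, 2, (8,)))
    unlearn_step(model, forget_batch)          # erase influence on the weights
    unprune_and_reprune(model, sparsity=0.9)   # erase influence on the topology
```

In a structured-pruning variant, magnitude_mask would score whole rows, channels, or heads instead of individual weights; the topology re-derivation step is otherwise the same, which is consistent with the summary's claim that the method applies to both sparse-model families.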
