Enhancing DPSGD via Per-Sample Momentum and Low-Pass Filtering

arXiv — cs.LG · Thursday, November 13, 2025 at 5:00:00 AM
The paper 'Enhancing DPSGD via Per-Sample Momentum and Low-Pass Filtering', recently submitted to arXiv, presents a novel approach to improving Differentially Private Stochastic Gradient Descent (DPSGD), a method widely used for training deep neural networks under privacy constraints. Traditional DPSGD implementations often suffer reduced accuracy because of the bias introduced by per-sample gradient clipping and the noise added for privacy. The proposed DP-PMLF method mitigates these issues by combining per-sample momentum with a low-pass filtering strategy, which smooths gradient estimates and reduces sampling variance. The theoretical analysis in the paper indicates an improved convergence rate while maintaining rigorous differential privacy guarantees, and empirical evaluations demonstrate that DP-PMLF significantly improves the privacy-utility trade-off compared to existing state-of-the-art DPSGD variants. This advancement is crucial for the ongoing development of privacy-preserving machin…
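To make the ingredients concrete, here is a minimal NumPy sketch of one private update step combining the elements the summary names: a per-sample momentum buffer acting as a low-pass filter, per-sample clipping to bound sensitivity, and Gaussian noise. The function name, hyperparameters, and exact ordering are illustrative assumptions, not the paper's actual DP-PMLF algorithm.

```python
import numpy as np

def dp_pmlf_step(grads, momenta, clip_norm=1.0, noise_mult=1.0,
                 beta=0.9, lr=0.1, rng=None):
    """Illustrative private step (hypothetical sketch, not the paper's
    exact method). `grads` has shape (batch, dim); `momenta` is a
    running per-sample momentum buffer of the same shape."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Low-pass filter: exponential moving average per sample.
    momenta = beta * momenta + (1.0 - beta) * grads
    # Clip each smoothed per-sample estimate to bound sensitivity.
    norms = np.linalg.norm(momenta, axis=1, keepdims=True)
    clipped = momenta * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Aggregate and add calibrated Gaussian noise (Gaussian mechanism).
    noisy = clipped.sum(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=grads.shape[1])
    update = -lr * noisy / grads.shape[0]
    return update, momenta
```

Filtering before clipping is what reduces the variance of the clipped estimate: the momentum buffer changes slowly, so clipping distorts it less than it would distort raw per-sample gradients.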
— via World Pulse Now AI Editorial System


Recommended Readings
On the Relationship Between Adversarial Robustness and Decision Region in Deep Neural Networks
Positive · Artificial Intelligence
The article discusses evaluating Deep Neural Networks (DNNs) in terms of both generalization performance and robustness against adversarial attacks. It highlights the difficulty of assessing DNNs solely through generalization metrics now that their performance has reached state-of-the-art levels. The study introduces the concept of the Populated Region Set (PRS) to analyze the internal properties of DNNs that influence their robustness, revealing that a low PRS ratio correlates with improved adversarial robustness.
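One plausible way to picture a region-based measure like this: each input to a ReLU layer lands in a linear region identified by its on/off activation pattern, and the fraction of distinct patterns the data populates can serve as a region-set ratio. The sketch below is a generic illustration of that idea for a single layer; the function name and the exact definition of the PRS ratio are assumptions, not taken from the paper.

```python
import numpy as np

def populated_region_ratio(inputs, weights, bias):
    """Hypothetical PRS-style measure: count distinct ReLU activation
    patterns (linear regions) populated by the inputs, relative to the
    number of inputs."""
    pre = inputs @ weights + bias            # pre-activations, (n, hidden)
    patterns = (pre > 0).astype(np.uint8)    # binary on/off pattern per input
    distinct = {p.tobytes() for p in patterns}
    return len(distinct) / len(inputs)
```

Under this reading, a low ratio means many inputs share the same linear region, i.e. the network's decision surface is locally simpler around the data.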
Sequentially Auditing Differential Privacy
Positive · Artificial Intelligence
A new practical sequential test for auditing differential privacy guarantees of black-box mechanisms has been proposed. This test processes streams of outputs, allowing for anytime-valid inference while controlling Type I error. It significantly reduces the sample size needed for detecting violations from 50,000 to just a few hundred examples across various mechanisms. Notably, it can identify DP-SGD privacy violations in under one training run, unlike previous methods that required complete model training.
FQ-PETR: Fully Quantized Position Embedding Transformation for Multi-View 3D Object Detection
Positive · Artificial Intelligence
The paper titled 'FQ-PETR: Fully Quantized Position Embedding Transformation for Multi-View 3D Object Detection' addresses the challenges of deploying PETR models in autonomous driving due to their high computational costs and memory requirements. It introduces FQ-PETR, a fully quantized framework that aims to enhance efficiency without sacrificing accuracy. Key innovations include a Quantization-Friendly LiDAR-ray Position Embedding and techniques to mitigate accuracy degradation typically associated with quantization methods.
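As background for what "fully quantized" means here, the basic building block is uniform fake-quantization: values are scaled to a low-bit integer grid and back, so the accuracy impact of the grid can be measured before deploying integer kernels. The sketch below is a generic symmetric per-tensor scheme, not FQ-PETR's specific quantization-friendly embedding design.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Generic symmetric uniform fake-quantization (illustrative only):
    map x onto a signed num_bits integer grid, then dequantize."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax          # per-tensor scale factor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale, scale
```

Non-uniform quantities such as position embeddings are exactly where this simple scheme degrades accuracy, which is the gap the paper's quantization-friendly embedding is designed to close.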