Variance Reduction via Resampling and Experience Replay

arXiv (stat.ML) — Friday, November 14, 2025, 5:00:00 AM
The paper 'Variance Reduction via Resampling and Experience Replay' develops a theoretical framework for experience replay, a key technique in reinforcement learning that improves learning stability by reusing past experiences. The authors model experience replay as the computation of resampled U- and V-statistics and prove variance reduction guarantees under this model. They apply the framework to policy evaluation, both with the Least-Squares Temporal Difference (LSTD) algorithm and with a model-free algorithm based on Partial Differential Equations (PDEs), reporting notable gains in stability and efficiency, especially when data are limited. The framework also extends to kernel ridge regression, where it reduces computational cost from O(n^3) to O(n^2).
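The resampling view of replay can be illustrated with a toy sketch. This is hypothetical code, not from the paper: it estimates a mean by averaging many mini-batch estimates drawn with replacement from a fixed dataset (a V-statistic-style resampled estimator), and compares its variance to that of a single mini-batch estimate, mimicking how replay reuses past experiences to stabilize estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def resampled_v_statistic(data, kernel, m, B, rng):
    """Average an order-1 kernel over B subsamples of size m, drawn with
    replacement from `data` -- a V-statistic-style resampling scheme,
    loosely modeling experience replay over a stored buffer."""
    estimates = [kernel(rng.choice(data, size=m, replace=True)) for _ in range(B)]
    return np.mean(estimates)

# Toy task: estimate E[X] from a buffer of n past observations.
n, m, B = 500, 50, 200   # buffer size, mini-batch size, number of replays
trials = 2000
single, replayed = [], []
for _ in range(trials):
    data = rng.normal(loc=1.0, scale=2.0, size=n)        # the "buffer"
    single.append(np.mean(rng.choice(data, size=m, replace=True)))  # one batch
    replayed.append(resampled_v_statistic(data, np.mean, m, B, rng))

# Averaging over many resampled batches shrinks the estimator's variance
# toward that of the full-buffer estimate.
print(np.var(single), np.var(replayed))
```

The variance printed for the replayed estimator is far smaller than for the single mini-batch, which is the qualitative effect the paper's U-/V-statistic analysis makes precise.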
— via World Pulse Now AI Editorial System


Recommended Readings
FusionFM: All-in-One Multi-Modal Image Fusion with Flow Matching
Positive · Artificial Intelligence
FusionFM presents a novel approach to multi-modal image fusion that overcomes the limitations of traditional task-specific models. By utilizing a probabilistic transport method and the flow matching paradigm, it enhances sampling efficiency and structural consistency in fused images. The method addresses the challenge of insufficient high-quality fused images for supervision by employing a task-aware selection function to identify reliable pseudo-labels. Additionally, a Fusion Refiner module systematically improves degraded components in the fusion process.