VIKING: Deep variational inference with stochastic projections

arXiv — stat.ML · Wednesday, October 29, 2025 at 4:00:00 AM
The recent paper 'VIKING: Deep variational inference with stochastic projections' addresses the difficulties that variational mean-field approximations face in overparametrized deep neural networks: unstable training and poor predictive power, which have long limited the practical effectiveness of Bayesian methods. By proposing a new variational family built on recent advances in neural network reparametrizations, the work aims to improve both prediction quality and uncertainty estimation.
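To make the baseline concrete, here is a minimal sketch of the Gaussian mean-field variational inference that the paper identifies as problematic, not VIKING itself; the layer, standard-normal prior, and training loop are illustrative assumptions.

```python
# Minimal Gaussian mean-field variational inference sketch, i.e. the baseline
# family the paper argues is problematic in overparametrized networks; this is
# not VIKING. Layer, prior, and training loop are illustrative assumptions.
import torch
import torch.nn as nn

class MeanFieldLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        # Reparameterization trick: w = mu + sigma * eps, eps ~ N(0, I)
        sigma = self.log_sigma.exp()
        w = self.mu + sigma * torch.randn_like(sigma)
        return x @ w.t()

    def kl(self):
        # Closed-form KL(q(w) || N(0, I)) for a factorized Gaussian posterior
        sigma2 = (2 * self.log_sigma).exp()
        return 0.5 * (sigma2 + self.mu ** 2 - 1 - 2 * self.log_sigma).sum()

layer = MeanFieldLinear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    nll = ((layer(x) - y) ** 2).mean()     # Gaussian likelihood up to constants
    loss = nll + layer.kl() / x.shape[0]   # negative ELBO with per-datum KL
    loss.backward()
    opt.step()
```

Fully factorized posteriors like this ignore parameter correlations, which is one root of the instability and poor predictions the paper describes.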
— via World Pulse Now AI Editorial System


Continue Reading
RNNs perform task computations by dynamically warping neural representations
Neutral · Artificial Intelligence
A recent study proposes that recurrent neural networks (RNNs) perform task computations by dynamically warping their representations of task variables. The hypothesis is supported by a newly developed Riemannian geometric framework that characterizes the topology and geometry of an RNN's representation manifold as a function of its input data and tracks how that geometry varies over time.
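As a rough illustration of the "warping" idea (not the paper's framework), one can probe how an RNN's input-to-hidden map stretches input space through the pullback metric of its Jacobian; all model sizes below are arbitrary.

```python
# Crude probe of representation "warping" (not the paper's Riemannian framework):
# measure how an RNN's input-to-hidden map stretches input space through the
# pullback metric G = J^T J of its Jacobian. All sizes here are arbitrary.
import torch

rnn = torch.nn.RNN(input_size=2, hidden_size=16, batch_first=True)

def final_hidden(x_flat):
    # Flattened length-10 sequence of 2-d inputs -> final 16-d hidden state
    _, h = rnn(x_flat.view(1, -1, 2))
    return h.squeeze()

for trial in range(3):
    x0 = torch.randn(10 * 2)
    J = torch.autograd.functional.jacobian(final_hidden, x0)  # shape (16, 20)
    eig = torch.linalg.eigvalsh(J.t() @ J)   # eigenvalues of the pullback metric
    print(f"trial {trial}: local stretch range {eig.min():.4f} .. {eig.max():.4f}")
```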
Continuous-time reinforcement learning for optimal switching over multiple regimes
Neutral · Artificial Intelligence
A recent study published on arXiv explores continuous-time reinforcement learning (RL) for optimal switching across multiple regimes, utilizing an exploratory formulation with entropy regularization. The research establishes the well-posedness of Hamilton-Jacobi-Bellman equations and characterizes the optimal policy, demonstrating convergence of policy iterations and value functions between exploratory and classical formulations.
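Schematically, entropy regularization in such exploratory formulations turns the Hamilton-Jacobi-Bellman maximization into a soft (log-sum-exp) form with a Gibbs optimal policy; the notation below is generic and not necessarily the paper's.

```latex
% Schematic entropy-regularized HJB for switching among finitely many regimes a,
% with generator \mathcal{L}^a, running reward r(x,a), entropy \mathcal{H}, and
% temperature \lambda > 0. Notation is generic, not necessarily the paper's.
\[
  0 \;=\; \partial_t V(t,x)
   \;+\; \sup_{\pi \in \Delta}\Bigl\{ \sum_a \pi(a)\,\bigl[\mathcal{L}^a V(t,x) + r(x,a)\bigr]
   \;+\; \lambda\,\mathcal{H}(\pi) \Bigr\}
   \;=\; \partial_t V(t,x)
   \;+\; \lambda \log \sum_a \exp\!\Bigl(\tfrac{\mathcal{L}^a V(t,x) + r(x,a)}{\lambda}\Bigr),
\]
\[
  \pi^{*}(a \mid t,x) \;\propto\; \exp\!\Bigl(\tfrac{\mathcal{L}^a V(t,x) + r(x,a)}{\lambda}\Bigr).
\]
```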
Solving Inverse Problems with Deep Linear Neural Networks: Global Convergence Guarantees for Gradient Descent with Weight Decay
Neutral · Artificial Intelligence
A recent study published on arXiv investigates the capabilities of deep linear neural networks in solving underdetermined linear inverse problems, specifically focusing on their convergence when trained using gradient descent with weight decay regularization. The findings suggest that these networks can adapt to unknown low-dimensional structures in the source signal, providing a theoretical basis for their empirical success in machine learning applications.
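A toy version of this setting, with assumed dimensions and hyperparameters, looks as follows: a sparse signal is recovered from few measurements via a product-of-matrices (deep linear) parametrization trained with gradient descent and weight decay.

```python
# Toy underdetermined inverse problem solved with a deep linear parametrization
# trained with weight decay. Dimensions, depth, and all hyperparameters are
# illustrative assumptions, not the paper's setup.
import torch

torch.manual_seed(0)
m, n, depth = 10, 50, 3                       # 10 measurements, 50 unknowns
A = torch.randn(m, n) / m ** 0.5
x_true = torch.zeros(n)
x_true[:3] = 1.0                              # low-dimensional (sparse) source
y = A @ x_true

# Deep linear network: x = W_depth ... W_1 e for a fixed input vector e
Ws = [torch.nn.Parameter(0.1 * torch.randn(n, n)) for _ in range(depth)]
e = torch.randn(n)
opt = torch.optim.Adam(Ws, lr=1e-2, weight_decay=1e-3)  # L2 penalty on factors

for step in range(1000):
    opt.zero_grad()
    x = e
    for W in Ws:
        x = W @ x
    loss = ((A @ x - y) ** 2).sum()           # data-fit term; decay is in opt
    loss.backward()
    opt.step()

print(f"relative recovery error: {(x.detach() - x_true).norm() / x_true.norm():.3f}")
```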
A result relating convex n-widths to covering numbers with some applications to neural networks
Neutral · Artificial Intelligence
A recent study published on arXiv presents a significant result linking convex n-widths to covering numbers, particularly in the context of neural networks. This research addresses the challenges of approximating high-dimensional function classes using a limited number of basis functions, revealing that certain classes can be effectively approximated despite the complexities of high-dimensional spaces.
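For reference, the standard objects involved are the Kolmogorov n-width and the covering number; the paper's convex n-width is a variant of the former, so the definitions below are schematic.

```latex
% Standard definitions behind this type of result (stated schematically; the
% paper's "convex n-width" restricts the approximating sets, so details differ).
% Kolmogorov n-width of a class F in a normed space X:
\[
  d_n(F)_X \;=\; \inf_{\substack{V \subset X \\ \dim V \le n}} \; \sup_{f \in F} \; \inf_{g \in V} \, \|f - g\|_X .
\]
% Covering number: N(F, \epsilon) is the smallest number of \epsilon-balls whose
% union contains F; \log N(F, \epsilon) is the metric entropy. Results of this
% kind bound n-widths in terms of covering numbers (or vice versa).
```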
Using physics-inspired Singular Learning Theory to understand grokking & other phase transitions in modern neural networks
Neutral · Artificial Intelligence
A recent study has applied Singular Learning Theory (SLT), a framework inspired by physics, to analyze grokking and other phase transitions in neural networks. The research empirically investigates SLT's free energy and local learning coefficients, revealing insights into the behavior of neural networks under various conditions.
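The core SLT quantity here is Watanabe's free-energy asymptotic, in which the learning coefficient controls the log n term:

```latex
% Watanabe's free-energy asymptotic from Singular Learning Theory: for n samples,
% empirical loss L_n minimized at w_0, and learning coefficient \lambda (the real
% log canonical threshold),
\[
  F_n \;=\; n\,L_n(w_0) \;+\; \lambda \log n \;+\; o(\log n).
\]
% Phase transitions such as grokking can then be read as changes in which region
% of parameter space (with which local \lambda) dominates the free energy.
```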
CoGraM: Context-sensitive granular optimization method with rollback for robust model fusion
Positive · Artificial Intelligence
CoGraM (Contextual Granular Merging) is a newly introduced optimization method designed to enhance the merging of neural networks without retraining, addressing issues of accuracy and stability that are prevalent in existing methods like Fisher merging. This multi-stage, context-sensitive approach utilizes rollback mechanisms to prevent harmful updates, thereby improving the robustness of the merged network.
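Since CoGraM's exact granularity and context logic are not described here, the following is a purely hypothetical sketch of the general merge-then-rollback pattern: merge parameters group by group and undo any step that worsens a held-out loss.

```python
# Purely hypothetical sketch of a merge-with-rollback pattern in the spirit of
# CoGraM: merge parameters group by group and undo any step that worsens a
# held-out loss. CoGraM's actual context-sensitive granularity is not shown.
import copy
import torch

def merge_with_rollback(model_a, model_b, eval_loss):
    merged = copy.deepcopy(model_a)
    params = dict(merged.named_parameters())
    best = eval_loss(merged)
    for name, p_b in model_b.named_parameters():
        backup = params[name].data.clone()
        params[name].data = 0.5 * (backup + p_b.data)  # one granular merge step
        new = eval_loss(merged)
        if new > best:
            params[name].data = backup                 # rollback harmful update
        else:
            best = new
    return merged

# Demo with assumed models and a held-out MSE loss
a, b = torch.nn.Linear(4, 1), torch.nn.Linear(4, 1)
x, y = torch.randn(64, 4), torch.randn(64, 1)
mse = lambda m: torch.nn.functional.mse_loss(m(x), y).item()
merged = merge_with_rollback(a, b, mse)
```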
Why Rectified Power Unit Networks Fail and How to Improve It: An Effective Field Theory Perspective
Positive · Artificial Intelligence
The introduction of the Modified Rectified Power Unit (MRePU) activation function addresses critical issues faced by deep Rectified Power Unit (RePU) networks, such as instability during training due to vanishing or exploding values. This new function retains the advantages of differentiability and universal approximation while ensuring stable training conditions, as demonstrated through extensive theoretical analysis and experiments.
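The instability itself is easy to reproduce: a plain RePU, RePU(x) = max(0, x)^p, repeatedly applied makes activations vanish or explode. The snippet below demonstrates that failure mode only; it does not implement the proposed MRePU.

```python
# Plain RePU, RePU(x) = max(0, x)^p: repeated application makes activations
# vanish or explode, the failure mode the paper targets. This demonstrates the
# problem only; it does not implement the proposed MRePU.
import torch

def repu(x, p=2):
    return torch.clamp(x, min=0.0) ** p

x = torch.randn(1000)
for layer in range(6):
    x = repu(x)
    print(f"layer {layer + 1}: mean |activation| = {x.abs().mean():.3e}")
```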
Learning to Solve Constrained Bilevel Control Co-Design Problems
Neutral · Artificial Intelligence
A new framework for Learning to Optimize (L2O) has been proposed to address the challenges of solving constrained bilevel control co-design problems, which are often complex and time-sensitive. This framework utilizes modern differentiation techniques to enhance the efficiency of finding solutions to these optimization problems.
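One common way to realize such differentiation, shown below as a generic sketch with made-up objectives, is to unroll a few inner gradient steps and backpropagate through them to update the outer design variables; the paper's actual L2O architecture and constraint handling may differ.

```python
# Generic unrolled-differentiation sketch for a bilevel problem with made-up
# objectives: solve the inner control problem with a few gradient steps, then
# backpropagate through the unroll to update the outer design variable.
import torch

design = torch.nn.Parameter(torch.tensor([1.0]))   # outer co-design variable
outer_opt = torch.optim.Adam([design], lr=1e-2)

for _ in range(200):
    outer_opt.zero_grad()
    u = torch.zeros(1, requires_grad=True)         # inner control variable
    for _ in range(10):                            # unrolled inner gradient steps
        inner_loss = (u - design) ** 2 + 0.1 * u ** 2
        (g,) = torch.autograd.grad(inner_loss, u, create_graph=True)
        u = u - 0.1 * g
    outer_loss = (u - 2.0) ** 2 + 0.01 * design ** 2
    outer_loss.backward()                          # gradient flows through unroll
    outer_opt.step()

print(f"learned design parameter: {design.item():.3f}")
```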