FOAM: Blocked State Folding for Memory-Efficient LLM Training

arXiv — cs.LG · Tuesday, December 9, 2025 at 5:00:00 AM
  • The Folded Optimizer with Approximate Moment (FOAM) introduces a new approach to training large language models (LLMs): it compresses optimizer states into block-wise gradient means combined with a residual correction mechanism (a rough sketch of the folding idea follows this summary). The method targets the memory bottleneck of traditional optimizers such as Adam, whose per-parameter states are memory-intensive during training.
  • FOAM is notable because it reduces total training memory while maintaining convergence rates comparable to vanilla Adam, making LLM training more efficient and potentially more accessible and scalable.
  • The emergence of FOAM aligns with ongoing efforts in the AI community to improve optimization algorithms, as seen with other recent innovations like HVAdam and AdamNX, which also seek to bridge performance gaps in adaptive optimizers. These developments reflect a broader trend towards optimizing resource usage in AI training, addressing the increasing demand for efficient computational methods.
— via World Pulse Now AI Editorial System
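
The summary above does not spell out FOAM's update rule, so the following is only a minimal NumPy sketch of the general idea it describes: storing part of the optimizer state as block-wise means rather than per-parameter values. The class name FoldedAdamSketch, the block size, the choice to fold only the second moment, and the omission of the residual correction are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fold(x, bs):
    """Mean of each contiguous block of size bs (zero-padding the tail;
    the slight bias in the last block is acceptable for a sketch)."""
    pad = (-x.size) % bs
    return np.concatenate([x, np.zeros(pad)]).reshape(-1, bs).mean(axis=1)

def unfold(b, bs, n):
    """Broadcast each block value back to per-parameter shape."""
    return np.repeat(b, bs)[:n]

class FoldedAdamSketch:
    """Adam-like step whose second-moment state is kept per block instead of
    per parameter. Hypothetical illustration only: FOAM's exact folding and
    residual-correction rules are not given in the summary above."""
    def __init__(self, n, bs=128, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.bs, self.lr, self.b1, self.b2, self.eps = bs, lr, b1, b2, eps
        self.m = np.zeros(n)               # first moment, full size
        self.v = np.zeros(-(-n // bs))     # second moment, one entry per block
        self.t = 0

    def step(self, params, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * fold(grad * grad, self.bs)
        m_hat = self.m / (1 - self.b1 ** self.t)            # bias correction
        v_hat = unfold(self.v, self.bs, params.size) / (1 - self.b2 ** self.t)
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```

Relative to Adam, the second-moment buffer here shrinks by roughly the block size; FOAM's residual correction would additionally compensate for whatever the fold discards.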


Continue Reading
Correction of Decoupled Weight Decay
Neutral · Artificial Intelligence
A recent study challenges the conventional approach to decoupled weight decay in optimization algorithms, specifically questioning the long-held assumption that it should be proportional to the learning rate. The research suggests that a proportionality to the square of the learning rate may be more appropriate, based on steady-state orthogonality arguments. However, findings indicate minimal impact on training dynamics when the perpendicular component of updates is removed.
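
As a concrete illustration of the two scalings being contrasted, here is a hedged sketch of an AdamW-style decoupled decay step; adam_update stands in for the usual adaptive direction, and the lr-squared variant merely reflects the suggestion described above, not a confirmed recipe.

```python
def decoupled_decay_step(params, adam_update, lr, weight_decay, square_lr=False):
    """One decoupled-weight-decay (AdamW-style) step. Conventional practice
    scales the decay term by lr; the study summarized above argues that a
    scaling by lr**2 may be more appropriate."""
    decay_scale = lr ** 2 if square_lr else lr   # the contested proportionality
    return params - lr * adam_update - decay_scale * weight_decay * params
```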
Arc Gradient Descent: A Mathematically Derived Reformulation of Gradient Descent with Phase-Aware, User-Controlled Step Dynamics
Positive · Artificial Intelligence
The paper introduces Arc Gradient Descent (ArcGD), a new optimizer that reformulates gradient descent with phase-aware, user-controlled step dynamics. In evaluations, ArcGD outperforms the Adam optimizer on a non-convex benchmark and a real-world ML dataset, particularly in challenging settings such as the Rosenbrock function and CIFAR-10 image classification.
Stochastic Approximation with Block Coordinate Optimal Stepsizes
Neutral · Artificial Intelligence
The recent study on stochastic approximation with block-coordinate optimal stepsizes introduces adaptive stepsize rules designed to minimize the expected distance from an unknown target point. These rules utilize online estimates of the second moment of the search direction, leading to a new method that competes effectively with the widely used Adam algorithm while requiring less memory and fewer hyper-parameters.
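
The summary only states the general recipe (one adaptive stepsize per parameter block, driven by online second-moment estimates of the search direction), so the sketch below just shows that shape; the EMA smoothing, the inverse-square-root scaling, and the block partition are assumptions for illustration rather than the paper's actual stepsize rule.

```python
import numpy as np

def block_stepsize_sgd(params, grad_fn, blocks, n_steps=1000,
                       base_lr=0.1, beta=0.99, eps=1e-8):
    """SGD with one adaptive stepsize per parameter block (illustrative).
    `blocks` is a list of index arrays partitioning the parameters; each block
    keeps an online estimate of the second moment of its search direction and
    scales its step by the inverse square root of that estimate."""
    sq = np.zeros(len(blocks))                     # per-block second moments
    for _ in range(n_steps):
        g = grad_fn(params)
        for i, idx in enumerate(blocks):
            d = g[idx]                             # block search direction
            sq[i] = beta * sq[i] + (1 - beta) * float(d @ d) / d.size
            params[idx] -= base_lr / (np.sqrt(sq[i]) + eps) * d
    return params

# Example partition: four equal blocks over n parameters.
# blocks = np.array_split(np.arange(n), 4)
```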
ADAM Optimization with Adaptive Batch Selection
Positive · Artificial Intelligence
The introduction of Adam with Combinatorial Bandit Sampling (AdamCB) enhances the widely used Adam optimizer by integrating combinatorial bandit techniques, allowing for adaptive sample selection during neural network training. This approach addresses the inefficiencies of treating all data samples equally, leading to improved convergence rates and theoretical guarantees over previous methods.
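
The bandit machinery inside AdamCB is not detailed here, so the following only sketches the general pattern of adaptive sample selection the summary alludes to: keep a per-sample score, draw batches with probability proportional to the scores, and boost the scores of samples that produced high loss. The exponential-weights update and every name below are illustrative assumptions, not the AdamCB algorithm.

```python
import numpy as np

def pick_batch(scores, batch_size, rng):
    """Draw a batch without replacement, with probability proportional to score."""
    p = scores / scores.sum()
    return rng.choice(scores.size, size=batch_size, replace=False, p=p)

def update_scores(scores, idx, losses, eta=0.1):
    """Exponential-weights style boost: higher-loss samples are picked more
    often next round (one illustrative choice, not AdamCB's actual rule)."""
    scores[idx] *= np.exp(eta * losses / (losses.max() + 1e-12))
    return scores

# A training loop would call pick_batch before each Adam step and then
# update_scores from the per-sample losses of the selected batch.
```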