Guaranteeing Conservation of Integrals with Projection in Physics-Informed Neural Networks

arXiv — cs.LG · Thursday, November 13, 2025 at 5:00:00 AM
The introduction of a projection method for Physics-Informed Neural Networks (PINNs), called PINN-Proj, marks a notable step toward guaranteeing the conservation of integral quantities, which is crucial for keeping computational models faithful to the underlying physical laws. The soft constraints traditionally used in PINNs allow solutions that drift away from these laws, leading to inaccuracies. PINN-Proj addresses this by guaranteeing conservation of both linear and quadratic integrals through the solution of constrained non-linear optimization problems. The researchers report that PINN-Proj reduces conservation error by three to four orders of magnitude compared to previous methods while also marginally improving the accuracy of the partial differential equation (PDE) solutions. In addition, the projection improves the conditioning of the loss landscape and thereby convergence, suggesting its potential as a general framework for enforcing conservation in PINNs.
— via World Pulse Now AI Editorial System
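
To make the projection idea concrete, here is a minimal sketch of enforcing a single linear conserved integral (the total "mass" of a 1-D field) by orthogonally projecting a discretized prediction onto the constraint set. The grid, trapezoidal weights, and the `project_linear_integral` helper are illustrative assumptions rather than the authors' implementation; quadratic integrals would require the constrained non-linear solve described in the summary above.

```python
import numpy as np

def project_linear_integral(u, w, target):
    """Orthogonally project a discretized field u onto the affine set
    {v : w @ v == target}, enforcing a quadrature approximation of a
    linear conserved integral exactly."""
    residual = target - w @ u                # conservation defect of the prediction
    return u + w * (residual / (w @ w))      # minimum-norm correction along w

# Illustrative usage: force the total mass of a predicted field to match
# a hypothetical conserved value taken from the initial condition.
x = np.linspace(0.0, 1.0, 101)
w = np.full_like(x, x[1] - x[0]); w[[0, -1]] *= 0.5   # trapezoidal quadrature weights
u_pred = np.exp(-50.0 * (x - 0.4) ** 2)               # stand-in for a PINN output
mass0 = 0.25                                          # hypothetical conserved integral
u_proj = project_linear_integral(u_pred, w, mass0)
print(w @ u_proj)                                     # equals mass0 up to round-off
```

Because the constraint is affine, the projection has this closed form; the appeal of the approach is that conservation then holds by construction rather than only up to a penalty weight.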


Recommended Readings
One-Shot Transfer Learning for Nonlinear PDEs with Perturbative PINNs
Positive · Artificial Intelligence
A new framework for solving nonlinear partial differential equations (PDEs) has been proposed, integrating perturbation theory with one-shot transfer learning in Physics-Informed Neural Networks (PINNs). This method decomposes nonlinear PDEs into linear subproblems, which are solved using a Multi-Head PINN. The approach allows for quick adaptation to new PDE instances with varying conditions, achieving errors around 1e-3 and adaptation times under 0.2 seconds, demonstrating comparable accuracy to classical solvers.
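As a rough illustration of the perturbative decomposition, the sketch below assumes a weakly nonlinear Burgers-type equation u_t + eps*u*u_x = nu*u_xx and PyTorch; the `Head` networks and residual forms are hypothetical stand-ins, not the paper's Multi-Head PINN. Expanding u ≈ u0 + eps*u1 turns the nonlinear residual into a cascade of linear ones, each handled by its own head.

```python
import torch
import torch.nn as nn

nu = 0.01

def d(f, v):
    # derivative of field f with respect to input tensor v via autograd
    return torch.autograd.grad(f, v, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]

class Head(nn.Module):
    # one small head per perturbation order (a shared trunk could feed both)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=1))

head0, head1 = Head(), Head()
t = torch.rand(256, 1, requires_grad=True)
x = torch.rand(256, 1, requires_grad=True)
u0, u1 = head0(t, x), head1(t, x)

r0 = d(u0, t) - nu * d(d(u0, x), x)              # O(1): linear, homogeneous
f1 = (u0 * d(u0, x)).detach()                    # forcing from the solved zeroth order
r1 = d(u1, t) - nu * d(d(u1, x), x) + f1         # O(eps): linear in u1
loss = (r0 ** 2).mean() + (r1 ** 2).mean()
loss.backward()                                  # each head trains on a linear residual
```

The linearity of each subproblem is what the summary above credits for the fast, sub-0.2-second adaptation to new PDE instances.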
Augmented data and neural networks for robust epidemic forecasting: application to COVID-19 in Italy
Positive · Artificial Intelligence
This study introduces a data augmentation strategy that improves the training of neural networks for epidemic forecasting and thus their prediction accuracy. Synthetic data are generated with a compartmental model calibrated to the available data, uncertainty is incorporated into these trajectories, and the augmented dataset is then used with deep learning techniques. The findings indicate that neural networks trained on the augmented data achieve significantly better predictive performance, particularly Physics-Informed Neural Networks (PINNs) and Nonlinear Autoregressive (NAR) models.
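A minimal sketch of the augmentation step is below; the SIR dynamics, parameter values, and log-normal noise model are assumptions for illustration, not the study's calibrated compartmental model.

```python
import numpy as np

def simulate_sir(beta, gamma, i0, days, n=1_000_000):
    """Discrete-time SIR compartmental model; returns the infected curve."""
    s, i, r = n - i0, float(i0), 0.0
    infected = []
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected.append(i)
    return np.array(infected)

rng = np.random.default_rng(0)
base = simulate_sir(beta=0.3, gamma=0.1, i0=100, days=120)   # assumed parameters

# Synthetic replicas: multiplicative log-normal noise stands in for the
# uncertainty the study injects into its calibrated trajectories.
augmented = base * rng.lognormal(mean=0.0, sigma=0.1, size=(50, base.size))
# `augmented` (50 noisy trajectories) can supplement the real case counts
# when training a forecasting network such as a PINN or NAR model.
```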
Neuro-Spectral Architectures for Causal Physics-Informed Networks
Positive · Artificial Intelligence
Neuro-Spectral Architectures (NeuSA) have been introduced as a new class of Physics-Informed Neural Networks (PINNs) aimed at solving complex partial differential equations (PDEs). Traditional MLP-based PINNs often struggle with convergence in intricate initial value problems, leading to solutions that can violate causality. NeuSA addresses these challenges by learning a projection of the underlying PDE onto a spectral basis, thus improving convergence and enforcing causality through its integration with Neural ODEs.
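The sketch below illustrates the spectral-projection idea on the 1-D heat equation with a sine basis; the basis, the equation, and the forward-Euler integrator are assumptions for illustration, and in NeuSA the projected coefficient dynamics are instead coupled to Neural ODEs so that time is always advanced causally.

```python
import numpy as np

nu, K = 0.1, 16
x = np.linspace(0.0, 1.0, 201)
basis = np.array([np.sin(k * np.pi * x) for k in range(1, K + 1)])   # (K, nx)

u0 = np.exp(-100.0 * (x - 0.5) ** 2)          # initial condition
a = 2.0 * np.trapz(basis * u0, x, axis=1)     # project u0 onto the sine basis

def coeff_rhs(a):
    # Projecting u_t = nu * u_xx onto sin(k*pi*x) gives da_k/dt = -nu*(k*pi)^2 * a_k,
    # i.e. the PDE becomes an ODE system for the spectral coefficients.
    k = np.arange(1, K + 1)
    return -nu * (k * np.pi) ** 2 * a

dt, steps = 1e-3, 200
for _ in range(steps):                        # march the coefficient ODEs forward
    a = a + dt * coeff_rhs(a)

u_t = basis.T @ a                             # reconstruct u(x, t = steps * dt)
```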