Addressing A Posteriori Performance Degradation in Neural Network Subgrid Stress Models

arXiv — cs.LG · Monday, November 24, 2025 at 5:00:00 AM
  • Neural network subgrid stress models often show a significant gap between their a priori accuracy and their a posteriori performance once coupled to a Large Eddy Simulation (LES). The gap can be mitigated by augmenting the training data and simplifying the model's inputs, which improves robustness across different LES codes (a minimal illustration follows after this summary).
  • These training refinements matter for the reliability of computational fluid dynamics simulations: closing the a priori/a posteriori gap yields subgrid models that behave predictably once deployed in a solver, benefiting engineering and scientific applications that rely on LES.
— via World Pulse Now AI Editorial System
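
The summary does not spell out which augmentation scheme the authors use; one common option for subgrid stress models is to rotate filtered velocity-gradient samples and their stresses consistently, so the network cannot overfit to a particular frame orientation. The sketch below is a hypothetical Python illustration of that idea; all function and variable names are assumptions, not the paper's code.

```python
import numpy as np

def random_rotation_matrix(rng):
    # Draw a random 3x3 rotation via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))   # fix column signs so the factorization is unique
    if np.linalg.det(q) < 0:      # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q

def augment_sample(grad_u, tau, rng):
    """Rotate a filtered velocity-gradient tensor and its subgrid stress
    consistently, so the model sees the same physics in many orientations."""
    R = random_rotation_matrix(rng)
    return R @ grad_u @ R.T, R @ tau @ R.T

rng = np.random.default_rng(0)
grad_u = rng.normal(size=(3, 3))   # stand-in for one filtered velocity-gradient sample
tau = rng.normal(size=(3, 3))
tau = 0.5 * (tau + tau.T)          # subgrid stress tensors are symmetric
aug_grad_u, aug_tau = augment_sample(grad_u, tau, rng)
```

Augmentations of this kind enlarge the training set without new simulations, which is one plausible route to the robustness across LES codes that the article reports.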


Continue Reading
When Active Learning Fails, Uncalibrated Out of Distribution Uncertainty Quantification Might Be the Problem
Neutral · Artificial Intelligence
A recent study highlights the challenges of estimating prediction uncertainty in active learning campaigns for materials discovery, indicating that uncalibrated out-of-distribution uncertainty quantification may hinder model performance. The research evaluates various uncertainty estimation methods using ensembles of ALIGNN, eXtreme Gradient Boost, Random Forest, and Neural Network architectures, focusing on tasks related to solubility, bandgap, and formation energy predictions.
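
The study's exact uncertainty estimators are not reproduced here; as a rough illustration of the ensemble-based approach it evaluates, the sketch below trains a small bootstrap ensemble and uses the spread of member predictions as the acquisition signal. The model choice, function names, and acquisition rule are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def ensemble_uncertainty(X_train, y_train, X_pool, n_members=5, seed=0):
    """Bootstrap ensemble: the spread of the members' predictions on the
    candidate pool serves as a (possibly uncalibrated) uncertainty estimate."""
    rng = np.random.default_rng(seed)
    preds = []
    for m in range(n_members):
        idx = rng.integers(0, len(X_train), size=len(X_train))  # bootstrap resample
        model = GradientBoostingRegressor(random_state=m)
        model.fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_pool))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Toy acquisition step: query the pool point the ensemble is least sure about.
rng = np.random.default_rng(1)
X_train, X_pool = rng.normal(size=(40, 3)), rng.normal(size=(200, 3))
y_train = X_train.sum(axis=1) + 0.1 * rng.normal(size=40)
mean, std = ensemble_uncertainty(X_train, y_train, X_pool)
next_idx = int(np.argmax(std))
# If X_pool lies far outside the training distribution, this std can be badly
# miscalibrated -- the failure mode the study highlights.
```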
FAIR-Pruner: Leveraging Tolerance of Difference for Flexible Automatic Layer-Wise Neural Network Pruning
Positive · Artificial Intelligence
The FAIR-Pruner method has been introduced to enhance neural network pruning by adaptively determining the sparsity levels of each layer, addressing the limitations of traditional uniform pruning strategies that often lead to performance degradation. This innovative approach utilizes a novel indicator, Tolerance of Differences (ToD), to balance importance scores from different perspectives, thus improving efficiency in resource-limited environments.
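
The Tolerance of Differences (ToD) indicator is not described in enough detail here to reproduce, so the sketch below only illustrates the general idea FAIR-Pruner builds on: assigning a different sparsity level to each layer instead of one uniform rate, with plain magnitude pruning standing in as the selection criterion. All names and the per-layer rates are hypothetical.

```python
import torch
import torch.nn as nn

def layerwise_magnitude_prune(model: nn.Module, sparsity_per_layer: dict):
    """Zero out the smallest-magnitude weights in each named Linear layer,
    using a different sparsity level per layer rather than one uniform rate."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and name in sparsity_per_layer:
            w = module.weight.data
            k = int(sparsity_per_layer[name] * w.numel())
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            module.weight.data.mul_((w.abs() > threshold).float())

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
# Hypothetical per-layer rates; FAIR-Pruner would derive these adaptively from
# its ToD indicator rather than hand-picking them.
layerwise_magnitude_prune(model, {"0": 0.5, "2": 0.2})
```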
Local Entropy Search over Descent Sequences for Bayesian Optimization
Positive · Artificial Intelligence
A new approach called local entropy search (LES) has been proposed for Bayesian optimization, focusing on refining design spaces through gradient descent. This method propagates posterior beliefs over objectives, resulting in a probability distribution that guides the selection of the next evaluation by maximizing mutual information. Empirical results indicate that LES demonstrates strong sample efficiency compared to existing optimization methods.
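
The blurb does not give LES's actual acquisition function, which propagates posterior beliefs over descent sequences and maximizes mutual information; as a much simpler point of reference, the sketch below picks the next evaluation by maximizing the posterior predictive entropy of a Gaussian process surrogate. The kernel choice and the toy objective are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_query(X_obs, y_obs, candidates):
    """Fit a GP surrogate and return the candidate whose predictive
    distribution has the most entropy (for a Gaussian, the largest std)."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    gp.fit(X_obs, y_obs)
    _, std = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(std)]

# Toy 1-D objective: the next query lands where the surrogate is least certain.
f = lambda x: np.sin(3 * x).ravel()
X_obs = np.array([[0.1], [0.6], [0.9]])
candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
x_next = next_query(X_obs, f(X_obs), candidates)
```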