Machine learning in an expectation-maximisation framework for nowcasting

arXiv — stat.ML · Tuesday, December 9, 2025 at 5:00:00 AM
  • A new study introduces an expectation-maximisation (EM) framework for nowcasting that uses machine learning to cope with the incomplete information available when decisions must be made in real time. The framework employs neural networks and XGBoost to model both the occurrence of events and the process by which they are reported, and is demonstrated on Argentinian Covid-19 data.
  • The development is significant because it enhances the ability to make informed decisions in real time by effectively leveraging the information that is observable, thereby reducing the risk of under- or overestimating a situation due to reporting delays.
  • This advancement aligns with ongoing efforts in the field of artificial intelligence to improve predictive modeling, as seen in various applications ranging from healthcare to epidemic predictions. The integration of neural networks and other machine learning models reflects a growing trend towards more robust and interpretable AI solutions in high-stakes environments.
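A minimal sketch of the underlying EM idea, using a simple multinomial reporting-delay model in place of the study's neural-network and XGBoost components (all data and parameters below are invented for illustration): events occurring near the end of the observation window are right-truncated, and EM alternates between imputing the not-yet-reported counts and re-estimating the delay distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: daily event counts, each event reported with a delay of
# 0..D-1 days drawn from a (hidden) delay distribution.
T, D = 30, 5
true_delay = np.array([0.50, 0.25, 0.15, 0.07, 0.03])
true_counts = rng.poisson(100, size=T)
n = np.array([rng.multinomial(c, true_delay) for c in true_counts])

# Right truncation: a report with delay d on day t is visible only if
# t + d <= T - 1, so cells with d >= T - t are still unobserved "now".
obs = n.astype(float)
for t in range(T):
    obs[t, max(0, T - t):] = np.nan

# EM: E-step imputes the expected unreported counts given the current delay
# distribution; M-step re-estimates the delay distribution from the
# completed table.
p = np.full(D, 1.0 / D)
for _ in range(100):
    filled = obs.copy()
    for t in range(T):
        k = T - t                       # number of observed delay cells
        if k >= D:
            continue
        seen = obs[t, :k].sum()
        p_seen = p[:k].sum()
        filled[t, k:] = seen * p[k:] / max(p_seen, 1e-12)
    p = filled.sum(axis=0) / filled.sum()

nowcast = filled.sum(axis=1)            # estimated total events per day
```

The paper replaces this closed-form multinomial model with learned occurrence and reporting models; the alternating E/M structure is the shared idea.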
— via World Pulse Now AI Editorial System


Continue Reading
Heuristics for Combinatorial Optimization via Value-based Reinforcement Learning: A Unified Framework and Analysis
Neutral · Artificial Intelligence
A recent study has introduced a unified framework for applying value-based reinforcement learning (RL) to combinatorial optimization (CO) problems, utilizing Markov decision processes (MDPs) to enhance the training of neural networks as learned heuristics. This approach aims to reduce the reliance on expert-designed heuristics, potentially transforming how CO problems are addressed in various fields.
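As a rough, hypothetical illustration of value-based RL on a CO problem (not the paper's framework), a tiny 0/1 knapsack can be phrased as an MDP and solved with tabular Q-learning; the Q-table here stands in for the learned neural heuristic, and the instance is invented:

```python
import random

random.seed(0)

# Toy CO instance: a 0/1 knapsack phrased as an MDP.
# State = (next item index, remaining capacity); actions: 0 = skip, 1 = take.
values  = [6, 10, 12, 7]
weights = [1, 2, 3, 2]
CAP = 5

Q = {}  # tabular Q-function, standing in for a learned neural heuristic

def step(state, action):
    i, cap = state
    reward = 0
    if action == 1 and weights[i] <= cap:
        reward, cap = values[i], cap - weights[i]
    return (i + 1, cap), reward

def greedy(state):
    return max((0, 1), key=lambda a: Q.get((state, a), 0.0))

# Standard Q-learning with epsilon-greedy exploration.
alpha, gamma, eps = 0.1, 1.0, 0.2
for _ in range(5000):
    state = (0, CAP)
    while state[0] < len(values):
        a = random.randrange(2) if random.random() < eps else greedy(state)
        nxt, r = step(state, a)
        target = r if nxt[0] >= len(values) else \
            r + gamma * max(Q.get((nxt, b), 0.0) for b in (0, 1))
        q = Q.get((state, a), 0.0)
        Q[(state, a)] = q + alpha * (target - q)
        state = nxt

# Roll out the greedy learned policy to read off a solution.
state, total = (0, CAP), 0
while state[0] < len(values):
    state, r = step(state, greedy(state))
    total += r
```

In the paper's setting the table would be replaced by a neural network that generalises across problem instances, removing the need for a hand-designed heuristic.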
RaX-Crash: A Resource Efficient and Explainable Small Model Pipeline with an Application to City Scale Injury Severity Prediction
Neutral · Artificial Intelligence
RaX-Crash has been developed as a resource-efficient and explainable small-model pipeline for predicting injury severity in motor vehicle collisions in New York City, using a dataset of over one hundred thousand records. The model employs compact tree-based ensembles, specifically Random Forest and XGBoost, achieving accuracy competitive with small language models.
LayerPipe2: Multistage Pipelining and Weight Recompute via Improved Exponential Moving Average for Training Neural Networks
Positive · Artificial Intelligence
The paper 'LayerPipe2' introduces a refined method for training neural networks by addressing gradient delays in multistage pipelining, enhancing the efficiency of convolutional, fully connected, and spiking networks. This builds on the previous work 'LayerPipe', which successfully accelerated training through overlapping computations but lacked a formal understanding of gradient delay requirements.
An Improved Ensemble-Based Machine Learning Model with Feature Optimization for Early Diabetes Prediction
Positive · Artificial Intelligence
A new ensemble-based machine learning model has been developed to enhance early diabetes prediction using the BRFSS dataset, which includes over 253,000 health records. The model employs techniques like SMOTE and Tomek Links to address class imbalance and achieves a strong ROC-AUC score of approximately 0.96 through various algorithms, including Random Forest and XGBoost.
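The class-imbalance step can be illustrated with a minimal numpy sketch of the two named techniques: SMOTE-style interpolation to oversample the minority class, followed by Tomek-link removal to clean the class boundary (the toy data and parameters here are invented, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Imbalanced toy data: 200 majority (class 0) vs. 20 minority (class 1).
X0 = rng.normal(0.0, 1.0, size=(200, 2))
X1 = rng.normal(2.5, 1.0, size=(20, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 20)

# SMOTE-style oversampling: synthesise minority points by interpolating
# between a sampled minority point and one of its k minority neighbours.
def smote(Xmin, n_new, k=5):
    d = np.linalg.norm(Xmin[:, None] - Xmin[None], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]     # k nearest, excluding self
    base = rng.integers(len(Xmin), size=n_new)
    nbr = nn[base, rng.integers(k, size=n_new)]
    lam = rng.random((n_new, 1))
    return Xmin[base] + lam * (Xmin[nbr] - Xmin[base])

X_new = smote(X1, n_new=180)                   # balance the classes
Xb = np.vstack([X, X_new])
yb = np.concatenate([y, np.ones(180, dtype=int)])

# Tomek-link cleaning: a Tomek link is a pair of mutual nearest neighbours
# with opposite labels; drop the majority member of each such pair.
d = np.linalg.norm(Xb[:, None] - Xb[None], axis=2)
np.fill_diagonal(d, np.inf)
nn1 = d.argmin(axis=1)
drop = [i for i in range(len(Xb))
        if yb[i] == 0 and yb[nn1[i]] == 1 and nn1[nn1[i]] == i]
keep = np.setdiff1d(np.arange(len(Xb)), drop)
Xc, yc = Xb[keep], yb[keep]
```

In practice this is usually done with the `imbalanced-learn` library (`SMOTETomek`); the sketch above just shows what those two operations compute.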
GLL: A Differentiable Graph Learning Layer for Neural Networks
Positive · Artificial Intelligence
A new study introduces GLL, a differentiable graph learning layer designed for neural networks, which integrates graph learning techniques with backpropagation equations for improved label predictions. This approach addresses the limitations of traditional deep learning architectures that do not utilize relational information between samples effectively.
Explosive neural networks via higher-order interactions in curved statistical manifolds
Neutral · Artificial Intelligence
A recent study introduces curved neural networks as a novel model for exploring higher-order interactions in neural networks, leveraging a generalization of the maximum entropy principle. These networks demonstrate a self-regulating annealing process that enhances memory retrieval, leading to explosive phase transitions characterized by multi-stability and hysteresis effects.
Predictive Modeling of I/O Performance for Machine Learning Training Pipelines: A Data-Driven Approach to Storage Optimization
Positive · Artificial Intelligence
A recent study has introduced a machine learning approach to predict I/O performance for machine learning training pipelines, addressing the growing issue of data I/O bottlenecks that hinder GPU utilization. By systematically benchmarking various storage backends, the research identified optimal configurations, achieving an impressive R-squared of 0.991 with the XGBoost model, which predicts I/O throughput with an average error of 11.8%.
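The summary does not give the feature set, so the following sketch invents configuration features (worker count, batch size, prefetch depth) and uses a hand-rolled stump-boosting loop as a stand-in for XGBoost, just to show the shape of the regression task:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for storage-benchmark data: hypothetical pipeline
# configuration features and a throughput target.
n = 400
Xf = np.column_stack([
    rng.integers(1, 17, n),        # worker threads
    rng.uniform(1, 64, n),         # batch size (MB)
    rng.integers(0, 9, n),         # prefetch depth
])
yt = (50 * np.log1p(Xf[:, 0]) + 3 * Xf[:, 1] + 10 * Xf[:, 2]
      + rng.normal(0, 5, n))      # throughput (MB/s), with noise

# Minimal gradient boosting with decision stumps: each round fits a
# one-split tree to the current residuals.
def fit_stump(X, r):
    best = (np.inf, 0, 0.0, 0.0, 0.0)
    for j in range(X.shape[1]):
        for s in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= s
            if left.all() or not left.any():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r - np.where(left, lv, rv)) ** 2).sum()
            if err < best[0]:
                best = (err, j, s, lv, rv)
    return best[1:]

pred, lr = np.full(n, yt.mean()), 0.1
for _ in range(200):
    j, s, lv, rv = fit_stump(Xf, yt - pred)
    pred += lr * np.where(Xf[:, j] <= s, lv, rv)

# In-sample R-squared of the boosted model.
r2 = 1 - ((yt - pred) ** 2).sum() / ((yt - yt.mean()) ** 2).sum()
```

The study's actual pipeline uses XGBoost on real benchmark measurements; this toy version only illustrates how boosted trees can map storage configurations to a throughput prediction.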
CoGraM: Context-sensitive granular optimization method with rollback for robust model fusion
Positive · Artificial Intelligence
CoGraM, or Contextual Granular Merging, is a new optimization method designed to enhance the merging of neural networks without the need for retraining, addressing common issues such as accuracy loss and instability in federated and distributed learning environments.