Poisson Informed Retinex Network for Extreme Low-Light Image Enhancement

arXiv — cs.CV · Monday, November 3, 2025 at 5:00:00 AM
A new study introduces the Poisson Informed Retinex Network, aimed at enhancing images captured in extreme low-light conditions. The approach addresses the shortcomings of traditional noise assumptions, which often fail in real-world scenarios where noise is signal-dependent. By modeling Poisson noise explicitly, the method promises significant improvements in image quality, a valuable advance for applications such as photography and surveillance that depend on clear imaging in dark environments.
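The key premise here is that photon noise is signal-dependent: its variance grows with the signal, so dark pixels are proportionally much noisier than bright ones. The minimal sketch below (not the paper's model, just a standard illustration using NumPy's Poisson sampler) contrasts this with fixed-variance additive noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean low-light signal: pixel intensities as expected photon counts.
clean = np.array([1.0, 5.0, 50.0, 200.0])

# Signal-dependent Poisson noise: variance equals the signal itself.
poisson_noisy = rng.poisson(clean).astype(float)

# Signal-independent Gaussian noise with a fixed sigma, for contrast.
gaussian_noisy = clean + rng.normal(0.0, 5.0, size=clean.shape)

# Relative Poisson noise (std / mean) shrinks as 1/sqrt(signal),
# so the darkest pixels carry the most relative noise.
relative_std = np.sqrt(clean) / clean
print(relative_std)
```

This is why a Gaussian noise assumption breaks down at low photon counts: a single fixed sigma cannot describe both the dark and bright regions of the same frame.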
— via World Pulse Now AI Editorial System


Continue Reading
Convergence of Stochastic Gradient Langevin Dynamics in the Lazy Training Regime
NeutralArtificial Intelligence
A recent study published on arXiv presents a non-asymptotic convergence analysis of stochastic gradient Langevin dynamics (SGLD) in the lazy training regime, demonstrating that SGLD achieves exponential convergence to the empirical risk minimizer under certain conditions. The findings are supported by numerical examples in regression settings.
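The standard SGLD iteration referenced here is a gradient step plus injected Gaussian noise. The toy sketch below (a generic textbook SGLD update on a quadratic risk, not the paper's lazy-regime analysis; the inverse temperature `beta` is an illustrative choice) shows the iterates concentrating near the empirical risk minimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy quadratic empirical risk: f(theta) = 0.5 * ||theta - target||^2.
target = np.array([2.0, -1.0])

def grad(theta):
    return theta - target

theta = np.zeros(2)
step = 0.01
beta = 100.0  # inverse temperature; larger beta concentrates near the minimizer

# SGLD: descend the gradient, then add noise scaled by sqrt(2 * step / beta).
for _ in range(5000):
    noise = rng.normal(size=2)
    theta = theta - step * grad(theta) + np.sqrt(2.0 * step / beta) * noise

print(theta)  # close to target, up to stationary noise of order 1/sqrt(beta)
```

With a strongly convex risk like this one, the distance to the minimizer contracts geometrically per step, which is the flavor of the exponential convergence the study establishes under its lazy-training conditions.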
Bayes-DIC Net: Estimating Digital Image Correlation Uncertainty with Bayesian Neural Networks
PositiveArtificial Intelligence
A novel method called Bayes-DIC Net has been introduced to estimate uncertainty in Digital Image Correlation (DIC) using Bayesian Neural Networks. This method generates high-quality datasets based on non-uniform B-spline surfaces, enabling the construction of realistic displacement fields for training deep learning algorithms in DIC applications.
ImageNot: A contrast with ImageNet preserves model rankings
NeutralArtificial Intelligence
The introduction of ImageNot, a dataset designed to be significantly different from ImageNet while maintaining a similar scale, reveals that deep learning models retain their ranking when evaluated on this new dataset. This finding suggests that the relative performance of models is consistent across different datasets, despite variations in absolute accuracy.
Mitigating the Curse of Detail: Scaling Arguments for Feature Learning and Sample Complexity
NeutralArtificial Intelligence
A recent study published on arXiv addresses the complexities of feature learning in deep learning, proposing a heuristic method for predicting the scales at which various patterns emerge. This approach simplifies the analytical challenges associated with high-dimensional non-linear equations often encountered in deep learning problems.
A deep learning based radiomics model for differentiating intraparenchymal hematoma induced by cerebral venous thrombosis
NeutralArtificial Intelligence
A new study published in Nature — Machine Learning introduces a deep learning-based radiomics model designed to differentiate intraparenchymal hematoma caused by cerebral venous thrombosis. This model leverages advanced machine learning techniques to enhance diagnostic accuracy in medical imaging, particularly in identifying specific types of brain hemorrhages.
Pan-LUT: Efficient Pan-sharpening via Learnable Look-Up Tables
PositiveArtificial Intelligence
A novel pan-sharpening framework called Pan-LUT has been introduced, leveraging learnable look-up tables to enhance the processing of large remote sensing images efficiently. This method allows for the handling of 15K×15K images on a 24GB GPU, addressing the computational challenges faced by traditional deep learning approaches in real-world applications.
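The efficiency argument for look-up tables is that, once learned, applying them is a single indexing operation per pixel rather than a network forward pass. The sketch below illustrates that mechanism only; the table here is a fixed gamma curve standing in for whatever Pan-LUT actually learns.

```python
import numpy as np

# A 256-entry LUT mapping 8-bit intensities to enhanced intensities.
# In LUT-based frameworks the table entries are learned; here a fixed
# gamma curve is used purely as a stand-in.
gamma = 0.5
lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)

# Applying the LUT is one vectorized gather, with cost independent of
# any network depth; this is what makes LUT inference cheap on huge images.
image = np.array([[0, 64], [128, 255]], dtype=np.uint8)
enhanced = lut[image]
print(enhanced)
```

Because the gather touches each pixel exactly once and needs no intermediate activations, memory stays flat even for very large tiles, which is consistent with the 15K×15K-on-24GB claim.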
Atomic Diffusion Models for Small Molecule Structure Elucidation from NMR Spectra
PositiveArtificial Intelligence
A new framework named ChefNMR has been introduced to predict the structures of small molecules directly from 1D NMR spectra and chemical formulas, achieving over 65% accuracy in elucidating complex natural products. This advancement addresses the traditionally manual and expertise-heavy process of interpreting NMR data.
Convergence for Discrete Parameter Updates
PositiveArtificial Intelligence
A new study published on arXiv introduces a discrete parameter update approach for deep learning models, which aims to enhance training efficiency by avoiding the quantization of continuous updates. This method establishes convergence guarantees for a class of discrete schemes, exemplified by a multinomial update rule, and is supported by empirical evaluations.
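One plausible reading of a "multinomial update rule" is sketched below: sample which coordinate to update from a multinomial weighted by gradient magnitude, then move that coordinate by a fixed discrete increment. This is a hypothetical toy for intuition, not the paper's actual scheme or its convergence conditions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy objective: f(theta) = 0.5 * ||theta - target||^2.
target = np.array([3.0, -2.0, 1.0])
theta = np.zeros(3)
delta = 0.01  # fixed discrete step size; every update is exactly +/- delta

for _ in range(20000):
    g = theta - target
    if np.allclose(g, 0.0):
        break
    # Multinomial choice of coordinate, weighted by gradient magnitude;
    # the chosen coordinate then takes one discrete step downhill.
    probs = np.abs(g) / np.abs(g).sum()
    i = rng.choice(3, p=probs)
    theta[i] -= delta * np.sign(g[i])

print(theta)
```

The point of such schemes is that updates live on a fixed discrete grid from the start, rather than being computed continuously and quantized afterward, which is the distinction the summary draws.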