U-REPA: Aligning Diffusion U-Nets to ViTs

arXiv — cs.CV · Tuesday, November 25, 2025 at 5:00:00 AM


Continue Reading
A Highly Efficient Diversity-based Input Selection for DNN Improvement Using VLMs
Positive · Artificial Intelligence
A recent study has introduced Concept-Based Diversity (CBD), a highly efficient metric for image inputs that utilizes Vision-Language Models (VLMs) to enhance the performance of Deep Neural Networks (DNNs) through improved input selection. This approach addresses the computational intensity and scalability issues associated with traditional diversity-based selection methods.
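The summary does not spell out the CBD metric itself, but the general pattern of diversity-based selection over VLM embeddings is easy to illustrate. A minimal sketch, assuming CLIP-style image embeddings and a greedy farthest-point rule standing in for the paper's actual criterion:

```python
# Rough sketch of diversity-based input selection over VLM embeddings
# (e.g. CLIP image features). Greedy farthest-point selection stands in
# for the paper's CBD metric, which the summary does not spell out.
import numpy as np

def greedy_diverse_subset(embeddings: np.ndarray, k: int) -> list[int]:
    """Pick k indices whose embeddings are maximally spread out."""
    # Normalise rows so dot products become cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    selected = [0]                       # seed with an arbitrary point
    min_dist = 1.0 - emb @ emb[0]        # cosine distance to the subset
    for _ in range(k - 1):
        idx = int(np.argmax(min_dist))   # farthest from current subset
        selected.append(idx)
        min_dist = np.minimum(min_dist, 1.0 - emb @ emb[idx])
    return selected
```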
M3SR: Multi-Scale Multi-Perceptual Mamba for Efficient Spectral Reconstruction
Positive · Artificial Intelligence
The M3SR architecture, an advancement of the Mamba framework, has been introduced to enhance spectral reconstruction in hyperspectral imaging by addressing limitations in spatial perception and feature extraction. This multi-scale, multi-perceptual model integrates a fusion block within a U-Net structure to improve the analysis of complex image data.
ISLA: A U-Net for MRI-based acute ischemic stroke lesion segmentation with deep supervision, attention, domain adaptation, and ensemble learning
Positive · Artificial Intelligence
A new deep learning model named ISLA (Ischemic Stroke Lesion Analyzer) has been introduced for the segmentation of acute ischemic stroke lesions in MRI scans. Built on the U-Net architecture, it incorporates deep supervision, attention mechanisms, domain adaptation, and ensemble learning, and was trained on MRI data from more than 1,500 participants across multiple centers.
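Deep supervision, one of the ingredients listed above, is a standard technique: auxiliary losses are attached to intermediate decoder outputs. A minimal PyTorch sketch, with illustrative loss weights and head placement rather than ISLA's actual configuration:

```python
# Generic deep-supervision loss: auxiliary predictions from several
# decoder stages are upsampled and penalised against the full-resolution
# mask. Loss weights and head placement are illustrative assumptions.
import torch
import torch.nn.functional as F

def deep_supervision_loss(aux_logits, target, weights=(1.0, 0.5, 0.25)):
    """aux_logits: list of [B, C, h_i, w_i] stage outputs (fine to coarse);
    target: [B, H, W] integer lesion mask."""
    total = torch.zeros((), device=target.device)
    for logits, w in zip(aux_logits, weights):
        up = F.interpolate(logits, size=target.shape[-2:],
                           mode="bilinear", align_corners=False)
        total = total + w * F.cross_entropy(up, target)
    return total
```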
NOVAK: Unified adaptive optimizer for deep neural networks
Positive · Artificial Intelligence
NOVAK, a recently introduced unified adaptive optimizer for deep neural networks, combines several advanced techniques, including adaptive moment estimation and lookahead synchronization, to improve the performance and efficiency of neural network training.
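Lookahead synchronization, one of the techniques the summary names, is commonly implemented as a slow/fast weight scheme (Zhang et al., 2019). A minimal sketch wrapping an arbitrary inner optimizer; the k and alpha values are illustrative, and this is the generic technique, not NOVAK itself:

```python
# Standard Lookahead scheme: slow weights chase the fast optimizer every
# k steps. k and alpha are illustrative defaults, not NOVAK's settings.
import torch

class Lookahead:
    def __init__(self, optimizer, k: int = 5, alpha: float = 0.5):
        self.optimizer = optimizer       # fast inner optimizer, e.g. Adam
        self.k, self.alpha, self.steps = k, alpha, 0
        self.params = [p for g in optimizer.param_groups for p in g["params"]]
        self.slow = [p.data.clone() for p in self.params]  # slow weights

    def step(self):
        self.optimizer.step()            # fast update
        self.steps += 1
        if self.steps % self.k == 0:     # lookahead synchronisation
            for slow, fast in zip(self.slow, self.params):
                # Slow weights move a fraction alpha toward the fast ones,
                # then the fast weights are reset onto the slow trajectory.
                slow.add_(fast.data - slow, alpha=self.alpha)
                fast.data.copy_(slow)

# Usage: opt = Lookahead(torch.optim.Adam(model.parameters(), lr=1e-3))
```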
Out-of-distribution generalization of deep-learning surrogates for 2D PDE-generated dynamics in the small-data regime
Neutral · Artificial Intelligence
A recent study published on arXiv investigates the out-of-distribution generalization capabilities of deep-learning surrogates for two-dimensional partial differential equation (PDE) dynamics, particularly under small-data conditions. The research introduces a multi-channel U-Net architecture and evaluates its performance against various models, including ViT and PDE-Transformer, across different PDE families.
When Models Know When They Do Not Know: Calibration, Cascading, and Cleaning
Positive · Artificial Intelligence
A recent study titled 'When Models Know When They Do Not Know: Calibration, Cascading, and Cleaning' proposes a universal training-free method for model calibration, cascading, and data cleaning, enhancing models' ability to recognize their limitations. The research highlights that higher confidence correlates with higher accuracy and that models calibrated on validation sets maintain their calibration on test sets.
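The cascading idea lends itself to a short sketch: keep the small model's answer when its calibrated confidence clears a threshold, and defer to a larger model otherwise. The model handles, single-example input, and threshold below are placeholders, not the paper's setup:

```python
# Confidence-gated cascade: answer with the small model when it is
# confident, otherwise defer. Models and threshold are placeholders.
import torch

@torch.no_grad()
def cascade_predict(x, small_model, large_model, threshold: float = 0.9):
    probs = torch.softmax(small_model(x), dim=-1)     # [1, num_classes]
    conf, pred = probs.max(dim=-1)
    if conf.item() >= threshold:         # calibrated confidence is high
        return pred.item()
    return large_model(x).argmax(dim=-1).item()       # defer upward
```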
Hierarchical Online-Scheduling for Energy-Efficient Split Inference with Progressive Transmission
Positive · Artificial Intelligence
A novel framework named ENACHI has been proposed for hierarchical online scheduling in energy-efficient split inference with Deep Neural Networks (DNNs), addressing the inefficiencies in current scheduling methods that fail to optimize both task-level decisions and packet-level dynamics. This framework integrates a two-tier Lyapunov-based approach and progressive transmission techniques to enhance adaptivity and resource utilization.
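The drift-plus-penalty rule underlying Lyapunov-based scheduling can be sketched in a few lines; the queue model, cost terms, and trade-off weight V below are illustrative assumptions, not ENACHI's actual formulation:

```python
# One-step drift-plus-penalty decision: pick the action minimising
# V * energy_cost + Q * (arrivals - service), where Q is the current
# queue backlog. All quantities here are illustrative assumptions.
def drift_plus_penalty_choice(queue_backlog: float, options, V: float = 10.0):
    """options: iterable of (energy_cost, service_rate, arrival_rate)."""
    def score(opt):
        energy, service, arrivals = opt
        return V * energy + queue_backlog * (arrivals - service)
    return min(options, key=score)
```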
IGAN: A New Inception-based Model for Stable and High-Fidelity Image Synthesis Using Generative Adversarial Networks
Positive · Artificial Intelligence
A new model called Inception Generative Adversarial Network (IGAN) has been introduced to address the challenges of high-quality image synthesis and training stability in Generative Adversarial Networks (GANs). IGAN uses deeper inception-inspired blocks and dilated convolutions, achieving notable improvements in image fidelity with Fréchet Inception Distance (FID) scores of 13.12 and 15.08 on the CUB-200 and ImageNet datasets, respectively.
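As a rough illustration of the inception-style, dilated-convolution pattern the summary describes (with assumed channel split and dilation rates, not IGAN's actual block):

```python
# Toy inception-style block with parallel dilated convolutions; channel
# split and dilation rates are assumptions, not IGAN's actual design.
import torch
import torch.nn as nn

class DilatedInceptionBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 4          # assumes out_ch divisible by 4
        # Growing dilation widens the receptive field while padding = d
        # keeps the spatial resolution unchanged across branches.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d)
            for d in (1, 2, 4, 8)
        )
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))
```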
