DEMIST: Decoupled Multi-stream latent diffusion for Quantitative Myelin Map Synthesis

arXiv — cs.CV · Thursday, November 27, 2025 at 5:00:00 AM
  • A new method called DEMIST has been introduced for synthesizing quantitative magnetization transfer (qMT) maps, specifically pool size ratio (PSR) maps, from standard T1-weighted and FLAIR images using a 3D latent diffusion model. The approach uses a two-stage process: separate autoencoders followed by a conditional diffusion model with decoupled conditioning mechanisms.
  • This development is significant for the assessment of multiple sclerosis (MS), as it enables generation of myelin-sensitive biomarkers without lengthy specialized scans, potentially improving diagnostic efficiency and patient outcomes.
  • DEMIST's integration of techniques such as ControlNet and LoRA-modulated attention reflects a broader trend in medical imaging, where diffusion models are increasingly used to improve image quality and reconstruction, paralleling advances in areas such as real-world image super-resolution and MRI reconstruction (see the conditioning sketch below).
— via World Pulse Now AI Editorial System
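
The bullets above describe the general two-stage latent-diffusion pattern: compress the target and conditioning volumes into latents with separate autoencoders, then train a conditional denoiser on the target latents. The sketch below illustrates only that generic pattern in PyTorch; the toy modules, shapes, and additive conditioning are assumptions for exposition and do not reproduce DEMIST's actual architecture, ControlNet branches, or LoRA-modulated attention.

```python
# Minimal, illustrative PyTorch sketch of conditional latent diffusion with
# separate autoencoders for the target (PSR) and conditioning (T1w/FLAIR)
# volumes. All module names, shapes, and the toy noise step are placeholders,
# not the DEMIST implementation.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    def __init__(self, in_ch, latent_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv3d(32, latent_ch, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class ToyConditionalDenoiser(nn.Module):
    """Predicts the noise on the target latent; the conditioning latent is
    injected through its own projection (a loosely ControlNet-like stream)."""
    def __init__(self, latent_ch=4):
        super().__init__()
        self.cond_proj = nn.Conv3d(latent_ch, latent_ch, 1)
        self.backbone = nn.Sequential(
            nn.Conv3d(latent_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv3d(64, latent_ch, 3, padding=1),
        )

    def forward(self, z_t, z_cond, t):
        h = z_t + self.cond_proj(z_cond)   # decoupled conditioning stream
        return self.backbone(h)            # (timestep embedding omitted)

# Stage 1 would train the autoencoders; stage 2 trains the conditional denoiser.
enc_psr, enc_cond = ToyEncoder(1), ToyEncoder(2)    # PSR map; T1w+FLAIR stacked
denoiser = ToyConditionalDenoiser()

psr = torch.randn(1, 1, 32, 32, 32)                 # toy 3D volumes
cond = torch.randn(1, 2, 32, 32, 32)
z0, z_cond = enc_psr(psr), enc_cond(cond)

t = torch.randint(0, 1000, (1,))
noise = torch.randn_like(z0)
z_t = z0 + noise                                    # placeholder forward process
loss = nn.functional.mse_loss(denoiser(z_t, z_cond, t), noise)
loss.backward()
```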


Continue Reading
PG-ControlNet: A Physics-Guided ControlNet for Generative Spatially Varying Image Deblurring
Positive · Artificial Intelligence
PG-ControlNet has been introduced as a novel framework for spatially varying image deblurring, addressing the challenges posed by complex motion and noise. This approach reconciles model-based deep unrolling methods with generative models, capturing minute variations in degradation patterns through a dense continuum of high-dimensional compressed kernels.
A Gray-box Attack against Latent Diffusion Model-based Image Editing by Posterior Collapse
Positive · Artificial Intelligence
Recent advancements in Latent Diffusion Models (LDMs) have prompted the introduction of the Posterior Collapse Attack (PCA), a novel framework aimed at protecting images from unauthorized manipulation. This approach draws on the posterior collapse phenomenon observed in Variational Autoencoder (VAE) training, highlighting two distinct collapse types: diffusion collapse and concentration collapse.
Video Generation Models Are Good Latent Reward Models
Positive · Artificial Intelligence
Recent advancements in reward feedback learning (ReFL) highlight the effectiveness of video generation models as latent reward models, addressing significant challenges in aligning video generation with human preferences. Traditional video reward models have limitations due to their reliance on pixel-space inputs, which complicate the optimization process and increase memory usage.
OmniRefiner: Reinforcement-Guided Local Diffusion Refinement
Positive · Artificial Intelligence
OmniRefiner has been introduced as a detail-aware refinement framework aimed at improving reference-guided image generation. This framework addresses the limitations of current diffusion models, which often fail to retain fine-grained visual details during image refinement due to inherent VAE-based latent compression issues. By employing a two-stage correction process, OmniRefiner enhances pixel-level consistency and structural fidelity in generated images.
Directional Optimization Asymmetry in Transformers: A Synthetic Stress Test
Neutral · Artificial Intelligence
A recent study has introduced a synthetic stress test for Transformers, revealing a significant directional optimization gap in models like GPT-2. This research challenges the notion of reversal invariance in Transformers, suggesting that their architecture may contribute to directional failures observed in natural language processing tasks.
Generative Model-Aided Continual Learning for CSI Feedback in FDD mMIMO-OFDM Systems
Positive · Artificial Intelligence
A new study proposes a generative adversarial network (GAN)-based approach to enhance channel state information (CSI) feedback in frequency division duplexing (FDD) massive multiple-input multiple-output (mMIMO) orthogonal frequency division multiplexing (OFDM) systems. This method addresses challenges related to user mobility and catastrophic forgetting, enabling continual learning and improved performance across varying environments.
EfficientXpert: Efficient Domain Adaptation for Large Language Models via Propagation-Aware Pruning
Positive · Artificial Intelligence
EfficientXpert has been introduced as a lightweight domain-pruning framework designed to enhance the deployment of large language models (LLMs) in specialized fields such as healthcare, law, and finance. By integrating a propagation-aware pruning criterion with an efficient adapter-update algorithm, it allows for a one-step transformation of general pretrained models into domain-adapted experts while maintaining high performance at reduced model sizes.
Comparative Analysis of LoRA-Adapted Embedding Models for Clinical Cardiology Text Representation
Positive · Artificial Intelligence
A recent study evaluated ten transformer-based embedding models adapted for cardiology using Low-Rank Adaptation (LoRA) fine-tuning on a dataset of 106,535 cardiology text pairs. The results indicated that encoder-only architectures, particularly BioLinkBERT, outperformed larger decoder-based models in domain-specific performance while requiring fewer computational resources.
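
The LoRA approach described above attaches small low-rank update matrices to attention projections so that only those adapters are trained. Below is a minimal sketch of that setup using the Hugging Face peft library with an encoder-only model; the base checkpoint, rank, target modules, and example text are illustrative placeholders, not the configuration or data evaluated in the study.

```python
# Minimal sketch: attach LoRA adapters to an encoder-only embedding model and
# compute a mean-pooled sentence embedding. Checkpoint and hyperparameters are
# placeholders for illustration only.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

lora_cfg = LoraConfig(
    r=8,                                # rank of the low-rank update
    lora_alpha=16,                      # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["query", "value"],  # BERT-style attention projections
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()      # only the LoRA matrices are trainable

# Mean-pooled embedding from the adapted encoder (placeholder clinical phrase).
batch = tokenizer(["acute myocardial infarction"], return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```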