UltraGS: Gaussian Splatting for Ultrasound Novel View Synthesis

arXiv — cs.CV · Wednesday, November 12, 2025 at 5:00:00 AM
UltraGS targets novel view synthesis for ultrasound imaging, a vital tool in non-invasive clinical diagnostics. Because the probe's field of view is narrow, traditional ultrasound acquisitions make synthesizing unseen views difficult. UltraGS addresses this with a depth-aware Gaussian splatting strategy that assigns a learnable field of view to each Gaussian, improving depth prediction and structural representation. A lightweight rendering function, SH-DARS, integrates ultrasound-specific wave physics to model tissue intensity. Validated on the Clinical Ultrasound Examination Dataset, the framework reports state-of-the-art results in PSNR (up to 29.55), SSIM (up to 0.89), and MSE (as low as 0.002) while synthesizing views in real time at 64.69 fps. The code and dataset are open source, and a minimal sketch of the depth-aware compositing idea appears below.
— via World Pulse Now AI Editorial System
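
The abstract does not spell out the rendering math, so the following NumPy sketch only illustrates the general idea: front-to-back alpha compositing along one scanline, with each Gaussian's lateral footprint widened in proportion to a per-Gaussian field-of-view parameter and its depth. All names (composite_scanline, fovs, lateral_offsets) are hypothetical stand-ins, not the paper's API.

```python
# Minimal sketch (NumPy) of front-to-back alpha compositing along one
# ultrasound scanline. The per-Gaussian "field of view" here is an assumed
# stand-in for the learnable field of view the abstract describes.
import numpy as np

def composite_scanline(depths, intensities, opacities, fovs, lateral_offsets):
    """Composite depth-sorted Gaussians on one ray, near to far."""
    order = np.argsort(depths)                  # near-to-far traversal
    transmittance, out = 1.0, 0.0
    for i in order:
        footprint = fovs[i] * depths[i] + 1e-6  # lateral extent grows with depth
        weight = np.exp(-0.5 * (lateral_offsets[i] / footprint) ** 2)
        alpha = np.clip(opacities[i] * weight, 0.0, 1.0)
        out += transmittance * alpha * intensities[i]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:                # early exit once the ray saturates
            break
    return out

# Toy usage: three Gaussians at increasing depth along one scanline.
rng = np.random.default_rng(0)
print(composite_scanline(
    depths=np.array([1.0, 2.0, 3.0]),
    intensities=np.array([0.8, 0.5, 0.9]),
    opacities=np.array([0.6, 0.7, 0.9]),
    fovs=np.array([0.05, 0.08, 0.04]),
    lateral_offsets=rng.normal(0.0, 0.05, 3),
))
```

Front-to-back traversal lets the loop exit early once transmittance is exhausted, which is one reason splatting renderers reach real-time frame rates.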


Recommended Readings
Duplex-GS: Proxy-Guided Weighted Blending for Real-Time Order-Independent Gaussian Splatting
Positive · Artificial Intelligence
Duplex-GS is a framework for making 3D Gaussian Splatting more efficient. It combines proxy Gaussian representations with order-independent rendering, achieving photorealistic results in real time while avoiding the computational overhead of traditional depth-sorted alpha blending, which is especially costly on resource-constrained platforms. A generic sketch of order-independent weighted blending follows.
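
The summary names proxy-guided, order-independent blending but not its formula. The sketch below contrasts exact depth-sorted compositing with a generic weighted, order-independent blend in the spirit of weighted-blended OIT; it illustrates the blending family, not Duplex-GS's actual proxy-guided method, and the function names and weight decay k are assumptions.

```python
# Sorted alpha compositing vs. a generic order-independent weighted blend.
# This is an illustrative approximation, not Duplex-GS's formulation.
import numpy as np

def sorted_composite(colors, alphas, depths):
    """Exact front-to-back alpha compositing (requires a depth sort)."""
    order = np.argsort(depths)
    t, out = 1.0, 0.0
    for i in order:
        out += t * alphas[i] * colors[i]
        t *= 1.0 - alphas[i]
    return out, 1.0 - t                      # color, total coverage

def weighted_oit(colors, alphas, depths, k=1.0):
    """Order-independent estimate: depth-based weights replace the sort."""
    w = alphas * np.exp(-k * depths)         # nearer fragments weigh more
    color = np.sum(w * colors) / max(np.sum(w), 1e-8)
    coverage = 1.0 - np.prod(1.0 - alphas)   # the product is order-independent
    return color * coverage, coverage

colors = np.array([0.9, 0.2, 0.6])
alphas = np.array([0.5, 0.4, 0.7])
depths = np.array([0.3, 1.0, 2.2])
print(sorted_composite(colors, alphas, depths))
print(weighted_oit(colors, alphas, depths))
```

Because every term in the weighted blend is a commutative sum or product, fragments can be accumulated in any order, removing the per-pixel sort that dominates alpha-blending cost.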
From Attention to Frequency: Integration of Vision Transformer and FFT-ReLU for Enhanced Image Deblurring
Positive · Artificial Intelligence
Image deblurring, a core computer vision task, restores sharp images from blur caused by motion or camera shake. Deep learning methods such as CNNs and Vision Transformers (ViTs) struggle with complex blur and carry high computational cost. The proposed dual-domain architecture pairs a Vision Transformer with a frequency-domain FFT-ReLU module, suppressing blur artifacts while preserving detail and achieving superior PSNR and SSIM in extensive experiments. A sketch of the frequency-domain ReLU idea follows.
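
The summary names an FFT-ReLU module without detailing it. One plausible reading, sketched below in NumPy, applies ReLU to the real and imaginary parts of a feature map's 2D Fourier transform and then inverts the transform; the function name and the choice of rfft2 are assumptions, not the paper's architecture.

```python
# Minimal sketch of a frequency-domain ReLU: FFT, rectify, inverse FFT.
# An assumed interpretation of "FFT-ReLU", not the paper's exact module.
import numpy as np

def fft_relu(x):
    """Apply ReLU in the frequency domain of a 2D feature map."""
    freq = np.fft.rfft2(x)                        # real-input 2D FFT
    freq = np.maximum(freq.real, 0.0) + 1j * np.maximum(freq.imag, 0.0)
    return np.fft.irfft2(freq, s=x.shape)         # back to the spatial domain

x = np.random.default_rng(0).normal(size=(8, 8))
y = fft_relu(x)
print(x.shape, y.shape)                           # shapes preserved: (8, 8)
```

Rectifying frequency coefficients zeroes out part of the spectrum, a crude sparsity prior that suggests why such a module could help suppress blur while a spatial-domain branch preserves detail.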