UltraSam: A Foundation Model for Ultrasound using Large Open-Access Segmentation Datasets
Positive · Artificial Intelligence
The introduction of UltraSam marks a significant advance in automated ultrasound image analysis, a field often hindered by anatomical complexity and a scarcity of annotated data. By compiling the US-43d dataset, which comprises over 280,000 images with segmentation masks covering more than 50 anatomical structures, researchers have created a robust foundation for training the UltraSam model. This model, an adaptation of the Segment Anything Model (SAM), demonstrates markedly improved prompt-based segmentation performance on three diverse public datasets. Furthermore, a Vision Transformer initialized with UltraSam weights has outperformed models initialized with ImageNet, SAM, and MedSAM weights across a range of downstream segmentation and classification tasks. These results demonstrate UltraSam's foundational capabilities and highlight its potential as a fine-tuning starting point for medical imaging applications, ultimately improving the accuracy and efficiency of ultrasound diagnostics.
— via World Pulse Now AI Editorial System
