Scaling Self-Supervised and Cross-Modal Pretraining for Volumetric CT Transformers

arXiv — cs.CV · Monday, November 24, 2025 at 5:00 AM
  • A new foundation model named SPECTRE has been introduced, utilizing a fully transformer-based architecture for volumetric computed tomography (CT). This model employs self-supervised and cross-modal pretraining strategies to effectively learn CT representations, addressing challenges such as extreme token scaling and weak clinical supervision.
  • The development of SPECTRE is significant as it demonstrates the potential for high-performing, generalizable CT representations trained exclusively on openly available datasets. This could enhance diagnostic capabilities in medical imaging.
  • The introduction of SPECTRE aligns with ongoing advancements in AI-driven medical imaging, where models like X-WIN and PoCGM are also addressing limitations in traditional imaging techniques. These developments highlight a broader trend towards improving image quality and diagnostic accuracy through innovative AI frameworks.
— via World Pulse Now AI Editorial System
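The "extreme token scaling" challenge mentioned above can be made concrete with simple patch-count arithmetic. The sketch below is an illustration with hypothetical resolutions and patch sizes, not figures from the SPECTRE paper:

```python
# Illustrative arithmetic: why volumetric CT drives "extreme token scaling"
# for ViT-style transformers. Shapes here are hypothetical examples.

def num_patch_tokens(volume_shape, patch_shape):
    """Count non-overlapping patch tokens for a ViT-style patch embedding."""
    tokens = 1
    for dim, patch in zip(volume_shape, patch_shape):
        tokens *= dim // patch
    return tokens

# A typical 2D natural image for a ViT: 224x224 pixels, 16x16 patches.
tokens_2d = num_patch_tokens((224, 224), (16, 16))           # 14 * 14 = 196

# A hypothetical CT volume: 512x512 in-plane, 256 slices, 16^3 patches.
tokens_3d = num_patch_tokens((512, 512, 256), (16, 16, 16))  # 32 * 32 * 16 = 16384

print(tokens_2d, tokens_3d)  # 196 16384
```

Because self-attention cost grows quadratically with sequence length, the 3D case above is roughly (16384/196)² ≈ 7,000× more expensive than a standard 2D ViT, which is why volumetric CT pretraining requires dedicated token-scaling strategies.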


Continue Reading
Exploiting DINOv3-Based Self-Supervised Features for Robust Few-Shot Medical Image Segmentation
Positive · Artificial Intelligence
A novel framework named DINO-AugSeg has been proposed to enhance few-shot medical image segmentation by leveraging DINOv3-based self-supervised features. This approach addresses the challenge of limited annotated training data in clinical settings, utilizing wavelet-based feature-level augmentation and contextual information-guided fusion to improve segmentation accuracy across various imaging modalities such as MRI and CT.
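Under one plausible reading of "wavelet-based feature-level augmentation" (a hedged sketch; the actual DINO-AugSeg procedure and its parameters are not specified in this summary), the idea could be implemented as perturbing the high-frequency detail coefficients of a wavelet decomposition of a feature map while preserving its low-frequency approximation:

```python
import numpy as np

def haar_dwt_1d(x):
    """One-level Haar transform along the last axis (length must be even)."""
    approx = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2.0)
    detail = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Invert haar_dwt_1d exactly."""
    out = np.empty(approx.shape[:-1] + (approx.shape[-1] * 2,))
    out[..., ::2] = (approx + detail) / np.sqrt(2.0)
    out[..., 1::2] = (approx - detail) / np.sqrt(2.0)
    return out

def wavelet_feature_augment(feat, detail_scale=0.5, rng=None):
    """Hypothetical augmentation: jitter and rescale high-frequency detail
    coefficients, keeping the low-frequency approximation intact."""
    rng = rng or np.random.default_rng(0)
    approx, detail = haar_dwt_1d(feat)
    noise = rng.normal(1.0, 0.1, size=detail.shape)
    return haar_idwt_1d(approx, detail * detail_scale * noise)

# Toy feature map; an identity round-trip sanity check:
feat = np.random.default_rng(1).normal(size=(8, 16))
approx, detail = haar_dwt_1d(feat)
assert np.allclose(haar_idwt_1d(approx, detail), feat)
augmented = wavelet_feature_augment(feat)
```

Scaling down the detail band suppresses high-frequency texture in the features, which is one way such an augmentation could encourage robustness to appearance shifts across imaging modalities.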
Automated Machine Learning in Radiomics: A Comparative Evaluation of Performance, Efficiency and Accessibility
Neutral · Artificial Intelligence
A recent study evaluated the performance, efficiency, and accessibility of automated machine learning (AutoML) frameworks in the field of radiomics, focusing on their ability to assist researchers without programming skills in developing predictive models. The study tested six general-purpose and five radiomics-specific frameworks across ten diverse datasets, revealing the need for further development tailored to radiomics challenges.
