From pretraining to privacy: federated ultrasound foundation model with self-supervised learning

Nature — Machine Learning · Friday, November 21, 2025
  • A new federated ultrasound foundation model utilizing self-supervised learning has been developed, enhancing the capabilities of machine learning in medical imaging. This model aims to improve the accuracy and efficiency of ultrasound diagnostics while ensuring patient privacy through federated learning techniques.
  • This advancement is significant as it addresses the growing need for effective medical imaging solutions that respect patient confidentiality. By leveraging self-supervised learning, the model can learn from decentralized data sources without compromising sensitive information.
  • The development reflects a broader trend in artificial intelligence toward privacy-preserving techniques. As machine learning evolves, combining self-supervised learning with federated models may unlock new potential in fields such as healthcare, while also prompting discussion of data ethics and the future of AI.
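The decentralized-training pattern described above can be sketched with federated averaging (FedAvg). This is a minimal NumPy illustration of the general idea only — the paper's actual model architecture, self-supervised objective, and aggregation scheme are not reproduced here, and the toy linear model is an assumption for demonstration:

```python
# Minimal federated-averaging (FedAvg) sketch in NumPy.
# Illustrative only: the cited paper's model and training objective
# are not specified here; this uses a toy least-squares model.
import numpy as np

def local_update(w, data, lr=0.1):
    """One gradient step of least-squares regression on a client's
    private data (X, y); the raw data never leaves the client."""
    X, y = data
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fed_avg(w, client_datasets, rounds=100):
    """Server averages the clients' updated weights each round,
    weighted by local dataset size; only weights are exchanged."""
    for _ in range(rounds):
        updates = [local_update(w, d) for d in client_datasets]
        sizes = [len(d[1]) for d in client_datasets]
        w = np.average(updates, axis=0, weights=sizes)
    return w

# Toy usage: three "hospitals", each holding private data with y = 2x.
rng = np.random.default_rng(0)
clients = [(X := rng.normal(size=(100, 1)), 2.0 * X[:, 0]) for _ in range(3)]
w = fed_avg(np.zeros(1), clients)
print(w)  # the shared weight converges toward 2.0
```

The key privacy property is that each client transmits only model weights, never its data; the server sees aggregated parameters alone.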
— via World Pulse Now AI Editorial System

Continue Reading
Wasserstein-p Central Limit Theorem Rates: From Local Dependence to Markov Chains
Neutral · Artificial Intelligence
A recent study has established optimal finite-time central limit theorem (CLT) rates for multivariate dependent data in Wasserstein-$p$ distance, focusing on locally dependent sequences and geometrically ergodic Markov chains. The findings reveal the first optimal $O(n^{-1/2})$ rate in $W_1$ and significant improvements for $W_p$ rates under mild moment assumptions.
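The Wasserstein distance behind these rates has a simple closed form in one dimension. A minimal sketch (illustrative only, not code from the cited study): for equal-size 1-D samples, the empirical $W_1$ is the mean absolute difference of the sorted values, since sorting yields the optimal coupling.

```python
# Empirical Wasserstein-1 distance for 1-D samples (illustrative only).
import numpy as np

def wasserstein_1(x, y):
    """Empirical W_1 between two equal-size 1-D samples: mean absolute
    difference of sorted values (sorting gives the optimal coupling)."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

rng = np.random.default_rng(1)
x = rng.normal(size=1000)           # N(0, 1) sample
y = rng.normal(loc=0.5, size=1000)  # N(0.5, 1) sample
d = wasserstein_1(x, y)
print(d)  # close to the true W_1 of 0.5 (the mean shift)
```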
On the use of graph models to achieve individual and group fairness
Neutral · Artificial Intelligence
A new theoretical framework utilizing Sheaf Diffusion has been proposed to enhance fairness in machine learning algorithms, particularly in critical sectors such as justice, healthcare, and finance. This method aims to project input data into a bias-free space, thereby addressing both individual and group fairness metrics.
Multicenter evaluation of interpretable AI for coronary artery disease diagnosis from PET biomarkers
Neutral · Artificial Intelligence
A multicenter evaluation has been conducted on interpretable artificial intelligence (AI) for diagnosing coronary artery disease (CAD) using PET biomarkers, as reported in Nature — Machine Learning. This study aims to enhance the accuracy and reliability of CAD diagnoses through advanced machine learning techniques.
AI tools boost individual scientists but could limit research as a whole
Neutral · Artificial Intelligence
Recent advancements in artificial intelligence (AI) tools are enhancing the capabilities of individual scientists, allowing for more efficient research processes. However, there are concerns that this reliance on AI may narrow the scope and depth of research overall.
What the future holds for AI – from the people shaping it
Neutral · Artificial Intelligence
The future of artificial intelligence (AI) is being shaped by ongoing discussions among key figures in the field, as highlighted in a recent article from Nature — Machine Learning. These discussions focus on the transformative potential of AI across various sectors, including technology, healthcare, and materials science.
Sequence-based generative AI design of versatile tryptophan synthases
Neutral · Artificial Intelligence
A recent study published in Nature — Machine Learning presents a sequence-based generative AI design for versatile tryptophan synthases, aiming to enhance the understanding and engineering of these important enzymes. This innovative approach leverages machine learning techniques to optimize the design process, potentially leading to significant advancements in biotechnology and synthetic biology.
LLMs behaving badly: mistrained AI models quickly go off the rails
Negative · Artificial Intelligence
Recent studies have highlighted the troubling behavior of Large Language Models (LLMs), which can quickly deviate from expected outputs due to inadequate training. This phenomenon raises significant concerns regarding the reliability and safety of AI models, particularly as they are increasingly integrated into critical applications.
HumanBase: an interactive AI platform for human biology
Neutral · Artificial Intelligence
HumanBase has emerged as an interactive AI platform focused on human biology, leveraging advancements in machine learning to enhance understanding and analysis of biological data. This platform aims to facilitate research and applications in the field of human biology by providing a user-friendly interface for data interaction.
