THUNDER: Tile-level Histopathology image UNDERstanding benchmark

arXiv — cs.CV — Tuesday, October 28, 2025, 4:00 AM
Recent advances in digital pathology, particularly the introduction of numerous foundation models for tile-level images, have made systematic benchmarking essential. Clear, shared benchmarks let researchers and practitioners compare approaches in a rapidly evolving field, showing which methods work best for specific tasks and ultimately improving diagnostic accuracy and patient outcomes.
— via World Pulse Now AI Editorial System


Recommended Readings
Explaining Digital Pathology Models via Clustering Activations
Positive · Artificial Intelligence
A new clustering-based explainability technique for digital pathology models using convolutional neural networks has been introduced. This method differs from traditional saliency map techniques by providing a global view of model behavior while offering detailed insights. The technique enhances understanding and confidence in model predictions, potentially accelerating clinical adoption. Its effectiveness was evaluated on a prostate cancer detection model, showcasing its practical utility in medical diagnostics.
Bias-Restrained Prefix Representation Finetuning for Mathematical Reasoning
Positive · Artificial Intelligence
The paper titled 'Bias-Restrained Prefix Representation Finetuning for Mathematical Reasoning' introduces a new method called Bias-REstrained Prefix Representation FineTuning (BREP ReFT). The approach aims to enhance the mathematical reasoning capabilities of models by addressing a limitation of existing representation finetuning (ReFT) methods, which struggle with mathematical tasks. Through extensive experiments, the study demonstrates that BREP ReFT outperforms both standard ReFT and weight-based parameter-efficient finetuning (PEFT) methods.
Transformers know more than they can tell -- Learning the Collatz sequence
Neutral · Artificial Intelligence
The study investigates the ability of transformer models to predict long steps in the Collatz sequence, a complex arithmetic function that maps odd integers to their successors. The accuracy of the models varies significantly depending on the base used for encoding, achieving up to 99.7% accuracy for bases 24 and 32, while dropping to 37% and 25% for bases 11 and 3. Despite these variations, all models exhibit a common learning pattern, accurately predicting inputs with similar residuals modulo 2^p.
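The odd-to-odd step of the Collatz map described above can be sketched as follows (a minimal illustration of the standard function, not the paper's code; the base-encoding used for the transformer inputs is omitted):

```python
def collatz_odd_successor(n: int) -> int:
    """Map a positive odd integer n to the next odd integer in its
    Collatz orbit: apply 3n + 1, then divide out all factors of 2."""
    assert n > 0 and n % 2 == 1, "input must be a positive odd integer"
    m = 3 * n + 1
    while m % 2 == 0:  # strip trailing factors of 2
        m //= 2
    return m

# Example: 7 -> 22 -> 11, so the odd successor of 7 is 11.
```

Predicting "long steps" then means predicting the result of many iterations of this map at once, which is where the models' accuracy depends so strongly on the encoding base.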
Bridging Hidden States in Vision-Language Models
Positive · Artificial Intelligence
Vision-Language Models (VLMs) are emerging models that integrate visual content with natural language. Current methods typically fuse data either early in the encoding process or late through pooled embeddings. This paper introduces a lightweight fusion module utilizing cross-only, bidirectional attention layers to align hidden states from both modalities, enhancing understanding while keeping encoders non-causal. The proposed method aims to improve the performance of VLMs by leveraging the inherent structure of visual and textual data.
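The "cross-only, bidirectional" fusion idea can be sketched with a single-head, non-causal cross-attention in NumPy (a hypothetical simplification for illustration; the paper's module, projections, and layer structure will differ):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_states, kv_states):
    """Single-head cross-attention: queries from one modality attend
    over ALL hidden states of the other (no causal mask)."""
    d = q_states.shape[-1]
    scores = q_states @ kv_states.T / np.sqrt(d)
    return softmax(scores) @ kv_states

def fuse(text_h, image_h):
    """Cross-only fusion: each modality's hidden states are updated
    only from the other modality, never from themselves."""
    text_out = text_h + cross_attention(text_h, image_h)    # text attends to image
    image_out = image_h + cross_attention(image_h, text_h)  # image attends to text
    return text_out, image_out
```

"Cross-only" here means there is no self-attention inside the fusion step, and "bidirectional" means every token can attend to every token of the other modality, keeping both encoders non-causal.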
Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions
Positive · Artificial Intelligence
Higher-order Neural Additive Models (HONAMs) have been introduced as an advancement over Neural Additive Models (NAMs), which are known for their predictive performance and interpretability. HONAMs address the limitation of NAMs by effectively capturing feature interactions of arbitrary orders, enhancing predictive accuracy while maintaining interpretability, crucial for high-stakes applications. The source code for HONAM is publicly available on GitHub.
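The additive structure being extended can be sketched in plain Python (a toy illustration, not the HONAM implementation: real NAMs/HONAMs learn each shape function with a small neural network, whereas here the per-feature and pairwise functions are passed in directly):

```python
def nam_predict(x, unary_fns, bias=0.0):
    """NAM-style prediction: y = bias + sum_i f_i(x_i).
    Each feature contributes through its own shape function."""
    return bias + sum(f(xi) for f, xi in zip(unary_fns, x))

def honam_predict(x, unary_fns, pair_fns, bias=0.0):
    """Higher-order variant: add pairwise interaction terms
    f_ij(x_i, x_j) on top of the purely additive model."""
    y = nam_predict(x, unary_fns, bias)
    for (i, j), f in pair_fns.items():
        y += f(x[i], x[j])
    return y
```

Because each term depends on only one feature (or one small feature tuple), every contribution can still be inspected in isolation, which is what preserves interpretability as interaction order grows.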
MAFM^3: Modular Adaptation of Foundation Models for Multi-Modal Medical AI
Positive · Artificial Intelligence
The article introduces MAFM^3, a framework designed for the modular adaptation of foundation models in multi-modal medical AI. It addresses the challenge of limited data in medical imaging by allowing a single foundation model to adapt to various domains, tasks, and modalities using lightweight modular components. This approach enables flexible activation of specific capabilities based on the input type or clinical objective, improving multitask and multimodality adaptation.
Φeat: Physically-Grounded Feature Representation
Positive · Artificial Intelligence
The paper titled "Φeat: Physically-Grounded Feature Representation" introduces a new visual backbone designed to enhance self-supervised learning in vision tasks. Current self-supervised features often mix high-level semantics with low-level physical factors, which can limit their effectiveness in tasks requiring physical reasoning. The proposed Φeat model focuses on material identity and employs a pretraining strategy that contrasts spatial crops and physical augmentations of materials under various conditions, aiming to improve feature robustness.