The SA-FARI Dataset: Segment Anything in Footage of Animals for Recognition and Identification

arXiv — cs.CV · Thursday, November 20, 2025 at 5:00:00 AM


Recommended Readings
Skin-R1: Toward Trustworthy Clinical Reasoning for Dermatological Diagnosis
Positive · Artificial Intelligence
The article discusses Skin-R1, a new vision-language model (VLM) aimed at improving clinical reasoning in dermatological diagnosis. It addresses limitations of prior approaches, such as data heterogeneity, missing diagnostic rationales, and poor scalability, by integrating deep reasoning with reinforcement learning to enhance diagnostic accuracy and reliability.
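As a rough illustration of how reinforcement learning can be coupled to diagnosis, the sketch below scores a model response with a verifiable reward: a small bonus for producing an explicit rationale and a larger one for a correct final diagnosis. The tag format, weights, and function names are assumptions for illustration, not Skin-R1's actual reward design.

```python
# Hypothetical verifiable reward for RL fine-tuning a diagnostic VLM.
# The <rationale>/<diagnosis> tags and the 0.2/1.0 weights are assumptions.
import re

def diagnosis_reward(response: str, gold_label: str) -> float:
    """Score a response: reward a well-formed rationale plus a correct diagnosis."""
    reward = 0.0
    # Format reward: the response should contain an explicit rationale
    # section before committing to a diagnosis.
    if re.search(r"<rationale>.*</rationale>", response, re.DOTALL):
        reward += 0.2
    # Accuracy reward: extract the final diagnosis and compare it to
    # the ground-truth label.
    match = re.search(r"<diagnosis>(.*?)</diagnosis>", response, re.DOTALL)
    if match and match.group(1).strip().lower() == gold_label.lower():
        reward += 1.0
    return reward

response = ("<rationale>Asymmetric border, color variegation.</rationale>"
            "<diagnosis>melanoma</diagnosis>")
print(diagnosis_reward(response, "melanoma"))  # 1.2
```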
Look, Zoom, Understand: The Robotic Eyeball for Embodied Perception
Positive · Artificial Intelligence
The article discusses EyeVLA, a robotic eyeball designed for active visual perception in embodied AI systems. Unlike traditional models that passively process images, EyeVLA actively acquires detailed information while managing spatial constraints. This innovation aims to enhance the effectiveness of robotic applications in open-world environments by integrating action tokens with vision-language models (VLMs) for improved understanding and interaction.
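The summary mentions integrating action tokens with VLMs; a minimal way to realize that idea is to quantize continuous camera commands into discrete vocabulary tokens the model can emit. The sketch below assumes pan/tilt ranges in degrees and a zoom factor; the bin counts and token names are hypothetical, not EyeVLA's actual scheme.

```python
# Minimal sketch of turning continuous camera controls into discrete
# "action tokens" for a VLM vocabulary. Ranges and bin counts are assumptions.
import numpy as np

PAN_BINS, TILT_BINS, ZOOM_BINS = 32, 32, 8

def to_action_tokens(pan_deg: float, tilt_deg: float, zoom: float) -> list[str]:
    """Quantize a (pan, tilt, zoom) command into vocabulary tokens."""
    # Assumed ranges: pan in [-90, 90] deg, tilt in [-45, 45] deg, zoom in [1, 10].
    pan_id = int(np.clip((pan_deg + 90) / 180 * (PAN_BINS - 1), 0, PAN_BINS - 1))
    tilt_id = int(np.clip((tilt_deg + 45) / 90 * (TILT_BINS - 1), 0, TILT_BINS - 1))
    zoom_id = int(np.clip((zoom - 1) / 9 * (ZOOM_BINS - 1), 0, ZOOM_BINS - 1))
    return [f"<pan_{pan_id}>", f"<tilt_{tilt_id}>", f"<zoom_{zoom_id}>"]

print(to_action_tokens(15.0, -10.0, 3.0))  # ['<pan_18>', '<tilt_12>', '<zoom_1>']
```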
Let Language Constrain Geometry: Vision-Language Models as Semantic and Spatial Critics for 3D Generation
Positive · Artificial Intelligence
The paper introduces VLM3D, a novel framework that uses vision-language models (VLMs) as critics for text-to-3D generation. It addresses two major limitations of current models: weak fine-grained semantic alignment and inadequate 3D spatial understanding. VLM3D employs a dual-query critic signal that evaluates both semantic fidelity and geometric coherence, and it demonstrates effectiveness across different generation paradigms.
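To make the dual-query idea concrete, the sketch below queries a VLM judge twice per rendered view of a 3D candidate, once for semantic fidelity to the prompt and once for geometric coherence, then averages the two signals. The `vlm_score` stub, question wording, and weighting are assumptions; VLM3D's actual critic formulation may differ.

```python
# Illustrative dual-query VLM critic for text-to-3D candidates.
# `vlm_score` is a stand-in; replace it with a real VLM judge call.

def vlm_score(rendered_view, question: str) -> float:
    """Stand-in for a VLM judge returning a score in [0, 1]."""
    return 0.5  # dummy constant so the sketch runs end to end

def critic_score(rendered_views, prompt: str, w_sem: float = 0.5) -> float:
    """Average two critic signals over rendered views of a 3D candidate."""
    sem = geo = 0.0
    for view in rendered_views:
        # Query 1: does the render match the text prompt semantically?
        sem += vlm_score(view, f"Does this image faithfully depict: '{prompt}'?")
        # Query 2: is the underlying geometry plausible and consistent?
        geo += vlm_score(view, "Is the object's 3D geometry coherent and artifact-free?")
    n = len(rendered_views)
    return w_sem * sem / n + (1 - w_sem) * geo / n

print(critic_score(["front.png", "side.png"], "a red armchair"))  # 0.5 with the stub
```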
GMAT: Grounded Multi-Agent Clinical Description Generation for Text Encoder in Vision-Language MIL for Whole Slide Image Classification
Positive · Artificial Intelligence
The article presents GMAT, a new framework that enhances Multiple Instance Learning (MIL) for whole slide image (WSI) classification. Existing methods rely on large language models (LLMs) to generate clinical descriptions, which often lack domain grounding and medical specificity. GMAT instead produces more expressive, medically grounded descriptions for the text encoder of a vision-language MIL pipeline, improving alignment with visual features.
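A minimal sketch of the vision-language MIL idea described above: class-level text embeddings (e.g., encoded clinical descriptions) attend over patch embeddings, and the pooled slide representation is scored against each description. The shapes, softmax attention pooling, and cosine-style scoring are illustrative assumptions, not GMAT's exact architecture.

```python
# Schematic vision-language MIL head for WSI classification, assuming
# precomputed patch embeddings and one text embedding per class.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vl_mil_logits(patch_emb: np.ndarray, class_text_emb: np.ndarray) -> np.ndarray:
    """patch_emb: (num_patches, d); class_text_emb: (num_classes, d)."""
    # Normalize so dot products behave like cosine similarities.
    patch_emb = patch_emb / np.linalg.norm(patch_emb, axis=1, keepdims=True)
    class_text_emb = class_text_emb / np.linalg.norm(class_text_emb, axis=1, keepdims=True)
    # Patch-to-class similarity drives the attention over instances.
    sim = patch_emb @ class_text_emb.T            # (num_patches, num_classes)
    attn = softmax(sim, axis=0)                   # weight patches per class
    slide_emb = attn.T @ patch_emb                # (num_classes, d) pooled bags
    # Slide-level logit per class: pooled embedding vs. its description.
    return (slide_emb * class_text_emb).sum(axis=1)

logits = vl_mil_logits(np.random.randn(1000, 512), np.random.randn(3, 512))
print(logits.shape)  # (3,)
```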
Doubly Debiased Test-Time Prompt Tuning for Vision-Language Models
Positive · Artificial Intelligence
The paper discusses the challenges of test-time prompt tuning for vision-language models, highlighting the issue of prompt optimization bias that can lead to suboptimal performance in downstream tasks. It identifies two main causes: the model's focus on entropy minimization, which may overlook prediction accuracy, and data misalignment between visual and textual modalities. To address these issues, the authors propose a new method called Doubly Debiased Test-Time Prompt Tuning, aimed at improving model performance in zero-shot settings.
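For context, the sketch below shows the baseline entropy-minimization objective the paper identifies as a source of bias, in the style of test-time prompt tuning with confident-view selection; it is not the proposed doubly debiased method. The `model(views, prompt_ctx)` interface and the selection fraction are assumptions.

```python
# Baseline test-time prompt tuning step: minimize the entropy of the
# averaged prediction over confident augmented views (TPT-style).
import torch

def entropy(probs: torch.Tensor) -> torch.Tensor:
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def tune_prompt_step(model, prompt_ctx, views, optimizer, keep_frac=0.1):
    """One test-time step over augmented views of a single test image."""
    logits = model(views, prompt_ctx)        # assumed: (num_views, num_classes)
    probs = logits.softmax(dim=-1)
    ent = entropy(probs)
    # Keep only the most confident (lowest-entropy) views.
    k = max(1, int(keep_frac * len(views)))
    idx = ent.topk(k, largest=False).indices
    # Entropy of the averaged prediction: confident but possibly inaccurate,
    # which is the optimization bias the paper targets.
    loss = entropy(probs[idx].mean(dim=0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```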