SAM 3: Segment Anything with Concepts

arXiv — cs.CV · Monday, November 24, 2025 at 5:00:00 AM
  • The Segment Anything Model (SAM) 3 has been introduced as a unified framework that detects, segments, and tracks objects in images and videos using concept prompts. The model advances Promptable Concept Segmentation (PCS) by using a scalable data engine to build a dataset with 4 million unique concept labels, significantly improving segmentation accuracy in both images and videos.
  • This advancement is significant for Meta: it positions the company at the forefront of AI-driven visual recognition, roughly doubling the accuracy of previous models and extending SAM's reach into applications such as wildlife conservation and historical map analysis.
  • The introduction of SAM 3 highlights a growing trend in AI towards integrating language and vision, as seen in other models that address segmentation granularity and few-shot learning. This reflects a broader movement in the field to enhance model performance through innovative training strategies and diverse datasets, ultimately aiming for more robust and versatile AI systems.
— via World Pulse Now AI Editorial System


Continue Reading
Sesame Plant Segmentation Dataset: A YOLO Formatted Annotated Dataset
Positive · Artificial Intelligence
A new dataset, the Sesame Plant Segmentation Dataset, has been introduced, comprising 206 training images, 43 validation images, and 43 test images in YOLO segmentation format. The dataset covers sesame plants at early growth stages, captured under varied environmental conditions in Nigeria, and was annotated with the Segment Anything Model version 2.
