UnSAMv2: Self-Supervised Learning Enables Segment Anything at Any Granularity

arXiv — cs.LG — Tuesday, November 18, 2025, 5:00:00 AM
  • UnSAMv2 extends the Segment Anything Model (SAM) with segmentation at any granularity, learned without human annotations. This addresses a key limitation of SAM, which often requires manual prompt adjustment to reach the desired level of detail, making UnSAMv2 a significant improvement in the field of computer vision.
  • The introduction of UnSAMv2 is crucial as it reduces the dependency on dense annotations, which are costly and time-consuming to produce.
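The idea of a scalar granularity control can be illustrated with a small, hypothetical sketch. This is not the actual UnSAMv2 interface: it simply assumes several candidate masks already exist for one prompt and maps a granularity value in [0, 1] onto them, from finest (smallest area) to coarsest (largest area).

```python
import numpy as np

def select_mask_by_granularity(masks, granularity):
    """Pick one of several candidate boolean masks for the same prompt.

    granularity=0.0 returns the finest (smallest-area) mask,
    granularity=1.0 the coarsest (largest-area) one.
    Illustrative stand-in for a learned granularity input,
    not the actual UnSAMv2 API.
    """
    areas = np.array([m.sum() for m in masks], dtype=float)
    order = np.argsort(areas)  # indices sorted finest -> coarsest
    idx = int(round(granularity * (len(masks) - 1)))
    return masks[order[idx]]

# Toy example: three nested masks (part, object, surrounding region).
base = np.zeros((8, 8), dtype=bool)
fine, mid, coarse = base.copy(), base.copy(), base.copy()
fine[3:5, 3:5] = True      # 4 pixels
mid[2:6, 2:6] = True       # 16 pixels
coarse[1:7, 1:7] = True    # 36 pixels

masks = [mid, coarse, fine]  # arbitrary input order
print(select_mask_by_granularity(masks, 0.0).sum())  # → 4
print(select_mask_by_granularity(masks, 1.0).sum())  # → 36
```

A learned model would of course produce the masks themselves; the sketch only shows how a single scalar can replace manual mask selection.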
— via World Pulse Now AI Editorial System

Continue Reading
Sesame Plant Segmentation Dataset: A YOLO Formatted Annotated Dataset
A new dataset, the Sesame Plant Segmentation Dataset, has been introduced, featuring 206 training images, 43 validation images, and 43 test images formatted for YOLO segmentation. This dataset focuses on sesame plants at early growth stages, captured under various environmental conditions in Nigeria, and annotated with the Segment Anything Model version 2.
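In the YOLO segmentation label format used by such datasets, each text line holds a class index followed by a polygon of coordinates normalized to the image size. A minimal parser sketch (a generic helper, not tooling shipped with this dataset):

```python
def parse_yolo_seg_line(line):
    """Parse one YOLO segmentation label line.

    Expected format: "<class_id> <x1> <y1> <x2> <y2> ...",
    with all coordinates normalized to [0, 1].
    Returns (class_id, [(x, y), ...]).
    """
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    if len(coords) % 2 != 0:
        raise ValueError("expected an even number of polygon coordinates")
    polygon = list(zip(coords[0::2], coords[1::2]))
    return class_id, polygon

# Example label line for class 0 with a three-vertex polygon:
cid, poly = parse_yolo_seg_line("0 0.10 0.20 0.50 0.20 0.30 0.80")
print(cid, poly)  # → 0 [(0.1, 0.2), (0.5, 0.2), (0.3, 0.8)]
```

One label file per image, one such line per object instance, is the usual convention for this format.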
