SAMannot: A Memory-Efficient, Local, Open-Source Framework for Interactive Video Instance Segmentation Based on SAM2
arXiv:2601.11301v1 Announce Type: new Abstract: Current research workflows for precise video segmentation are often forced into a compromise between labor-intensive manual curation, costly commercial platforms, and privacy-compromising cloud-based services. As a result, high-fidelity video instance segmentation in research is bottlenecked by manual annotation and by the privacy concerns of cloud-based tools. We present SAMannot, an open-source, local framework that integrates the Segment Anything Model 2 (SAM2) into a human-in-the-loop annotation workflow. To address the high resource requirements of foundation models, we modified the SAM2 dependency and implemented a processing layer that minimizes computational overhead and maximizes throughput, keeping the user interface highly responsive. Key features include persistent instance identity management, an automated ``lock-and-refine'' workflow with barrier frames, and a mask-skeletonization-based auto-prompting mechanism. SAMannot generates research-ready datasets in YOLO and PNG formats alongside structured interaction logs. Validated on animal behavior tracking use cases and on subsets of the LVOS and DAVIS benchmarks, the tool provides a scalable, private, and cost-effective alternative to commercial platforms for complex video annotation tasks.
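To illustrate the skeletonization-based auto-prompting idea described in the abstract, the sketch below derives point prompts for the next frame from an existing binary mask. It is a minimal illustration, not the SAMannot implementation: the function name `skeleton_point_prompts`, the `max_points` parameter, and the use of scikit-image and NumPy are assumptions for demonstration only.

```python
# Minimal sketch: sample point prompts from a mask skeleton (hypothetical helper,
# not taken from the SAMannot codebase).
import numpy as np
from skimage.morphology import skeletonize

def skeleton_point_prompts(mask: np.ndarray, max_points: int = 5) -> np.ndarray:
    """Derive positive point prompts for SAM2 from a binary mask's skeleton.

    mask: (H, W) boolean/uint8 array; returns an (N, 2) array of (x, y) points.
    """
    skeleton = skeletonize(mask.astype(bool))   # 1-pixel-wide medial axis
    ys, xs = np.nonzero(skeleton)               # skeleton pixel coordinates
    if len(xs) == 0:                            # empty mask: no prompts
        return np.empty((0, 2), dtype=np.int64)
    # Sample evenly spaced skeleton indices so prompts span the whole object.
    idx = np.linspace(0, len(xs) - 1, num=min(max_points, len(xs)), dtype=int)
    return np.stack([xs[idx], ys[idx]], axis=1)  # (x, y) order for point prompts
```

Points sampled along the skeleton stay well inside the object, which makes them plausible positive prompts for re-seeding a tracker such as SAM2 on subsequent frames.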