BehaviorVLM: Unified Finetuning-Free Behavioral Understanding with Vision-Language Reasoning
arXiv:2603.12176v1 Announce Type: cross Abstract: Understanding freely moving animal behavior is central to neuroscience, where pose estimation and behavioral understanding form the foundation for linking neural activity to natural actions. Yet both tasks still depend heavily on human annotation or on unstable unsupervised pipelines, limiting scalability and reproducibility. We present BehaviorVLM, a unified vision-language framework for pose estimation and behavioral understanding that requires no task-specific finetuning and only minimal human labeling, guiding pretrained Vision-Language Models (VLMs) through detailed, explicit, and verifiable reasoning steps. For pose estimation, we leverage quantum-dot-grounded behavioral data and propose a multi-stage pipeline that integrates temporal, spatial, and cross-view reasoning. This design greatly reduces human annotation effort, exposes low-confidence labels through geometric checks such as reprojection error, and produces labels that can later be filtered, corrected, or used to finetune downstream pose models. For behavioral understanding, we propose a pipeline that combines deep embedded clustering for over-segmented behavior discovery, VLM-based per-clip video captioning, and LLM-based reasoning to merge and semantically label behavioral segments. The behavioral pipeline operates directly on visual information and does not require keypoints to segment behavior. Together, these components enable scalable, interpretable, and label-light analysis of multi-animal behavior.
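The abstract mentions exposing low-confidence pose labels through geometric checks such as reprojection error. A minimal sketch of such a check, assuming a standard pinhole camera model with known 3x4 projection matrices per view (the function names, the per-view threshold, and the flagging policy are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def reproject(P, X):
    """Project a 3D point X (shape (3,)) with a 3x4 camera matrix P into 2D pixels."""
    Xh = np.append(X, 1.0)      # homogeneous coordinates
    x = P @ Xh
    return x[:2] / x[2]         # perspective divide

def flag_low_confidence(P_views, X, observed_2d, threshold_px=5.0):
    """Compute per-view reprojection errors for a triangulated keypoint X.

    Flags the label as low-confidence if any view's error exceeds the
    pixel threshold (a hypothetical policy; the paper does not specify one).
    """
    errors = np.array([
        np.linalg.norm(reproject(P, X) - np.asarray(obs))
        for P, obs in zip(P_views, observed_2d)
    ])
    return errors, bool(np.any(errors > threshold_px))
```

Labels flagged this way could then be filtered out, corrected, or down-weighted before being used to finetune a downstream pose model, as the abstract describes.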