EdgeTAM: On-Device Track Anything Model
On top of Segment Anything Model (SAM), SAM 2 further extends its capability from image to video inputs through a memory bank mechanism and obtains remarkable performance compared with previous methods, making it a foundation model for the video segmentation task. In this paper, we aim at making SAM 2 much more efficient so that it even runs on mobile devices while maintaining comparable performance. Despite several works optimizing SAM for better efficiency, we find they are not sufficient for SAM 2 because they all focus on compressing the image encoder, while our benchmark shows that the newly introduced memory attention blocks are also a latency bottleneck. Given this observation, we propose EdgeTAM, which leverages a novel 2D Spatial Perceiver to reduce the computational cost. In particular, the proposed 2D Spatial Perceiver encodes the densely stored frame-level memories with a lightweight Transformer that contains a fixed set of learnable queries.
Given that video segmentation is a dense prediction task, we find that preserving the spatial structure of the memories is essential, so the queries are split into global-level and patch-level groups. We also propose a distillation pipeline that further improves the performance without inference overhead. As a result, EdgeTAM performs comparably to prior methods on DAVIS 2017, MOSE, SA-V val, and SA-V test, while running at 16 FPS on iPhone 15 Pro Max.

SAM 2 extends SAM to handle both image and video inputs with a memory bank mechanism, and is trained on a new large-scale, multi-grained video tracking dataset (SA-V). Despite achieving astonishing performance compared to previous video object segmentation (VOS) models and allowing more diverse user prompts, SAM 2, as a server-side foundation model, is not efficient for on-device inference, e.g., on mobile CPU and NPU. (Throughout the paper, we use iPhone and iPhone 15 Pro Max interchangeably for simplicity.) Existing works that optimize SAM for better efficiency only consider squeezing its image encoder, since the mask decoder is extremely lightweight; this does not hold for SAM 2. Specifically, SAM 2 encodes past frames with a memory encoder, and these frame-level memories, along with object-level pointers (obtained from the mask decoder), serve as the memory bank.
These memories are then fused with the features of the current frame via memory attention blocks. Because the memories are densely encoded, this results in a huge matrix multiplication during the cross-attention between current-frame features and memory features. Therefore, despite containing relatively fewer parameters than the image encoder, the memory attention is not affordable for on-device inference in terms of computational complexity. This hypothesis is further supported by Fig. 2, where reducing the number of memory attention blocks almost linearly cuts down the overall decoding latency, and within each memory attention block, removing the cross-attention gives the most significant speed-up. To make such a video-based tracking model run on device, EdgeTAM exploits the redundancy in videos. In practice, we propose to compress the raw frame-level memories before performing memory attention. We start with naïve spatial pooling and observe a significant performance degradation, especially when using low-capacity backbones.
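To make the bottleneck concrete, below is a minimal sketch of the per-frame memory fusion together with a back-of-the-envelope cost comparison. All shapes (a 64×64 feature grid, 7 memory frames, 256-dim features, a single attention layer) are illustrative assumptions, not the exact SAM 2 configuration.

```python
import torch
import torch.nn as nn

dim, heads = 256, 8
frame_tokens = 64 * 64            # current-frame queries (assumed 64x64 grid)
dense_keys = 7 * 64 * 64          # densely stored memories of 7 past frames (assumed)
compressed_keys = 7 * 256         # e.g., 256 compressed tokens per frame (assumed)

# The expensive step: current-frame features cross-attend to all memory tokens.
attn = nn.MultiheadAttention(dim, heads, batch_first=True)
frame = torch.randn(1, frame_tokens, dim)
memory = torch.randn(1, dense_keys, dim)
fused, _ = attn(frame, memory, memory)

def attn_matmul_flops(nq: int, nk: int, d: int) -> float:
    # QK^T plus the attention-weighted sum of V; linear projections omitted.
    return 2.0 * nq * nk * d * 2

print(f"dense:      {attn_matmul_flops(frame_tokens, dense_keys, dim) / 1e9:.1f} GFLOPs")
print(f"compressed: {attn_matmul_flops(frame_tokens, compressed_keys, dim) / 1e9:.1f} GFLOPs")
# 4096 dense tokens per frame vs. 256 compressed tokens: 16x fewer key/value
# tokens, hence 16x fewer FLOPs in the cross-attention matmuls.
```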
However, naïvely incorporating a Perceiver also leads to a severe drop in performance. We hypothesize that, as a dense prediction task, video segmentation requires preserving the spatial structure of the memory bank, which a naïve Perceiver discards. Given these observations, we propose a novel lightweight module, named 2D Spatial Perceiver, that compresses frame-level memory feature maps while preserving their 2D spatial structure. Specifically, we split the learnable queries into two groups. One group functions similarly to the original Perceiver: each query performs global attention over the input features and outputs a single vector as a frame-level summarization. In the other group, the queries have 2D priors, i.e., each query is only responsible for compressing a non-overlapping local patch, so the output maintains the spatial structure while reducing the total number of tokens. Along with the architectural improvement, we further propose a distillation pipeline that transfers the knowledge of the powerful teacher, SAM 2, to our student model, which improves accuracy at no cost in inference overhead.
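As a concrete illustration of the two query groups, here is a minimal PyTorch sketch. The module name `SpatialPerceiver2D`, the single shared attention layer, and all sizes are assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpatialPerceiver2D(nn.Module):
    """Sketch: global queries summarize the whole frame memory; patch queries
    each compress one non-overlapping local patch, so the output tokens keep
    the 2D layout of the memory feature map."""

    def __init__(self, dim: int = 256, n_global: int = 64,
                 patch: int = 4, grid: int = 64):
        super().__init__()
        assert grid % patch == 0
        self.patch, self.grid = patch, grid
        n_patches = (grid // patch) ** 2
        self.global_q = nn.Parameter(torch.randn(n_global, dim))  # group 1
        self.patch_q = nn.Parameter(torch.randn(n_patches, dim))  # group 2
        # One shared attention layer keeps the sketch short; the real module
        # may use separate, deeper Transformers per group.
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, mem: torch.Tensor) -> torch.Tensor:
        # mem: (B, H*W, C) frame-level memory features, H = W = self.grid.
        B, N, C = mem.shape
        H = W = self.grid
        p = self.patch
        nP = (H // p) * (W // p)
        assert N == H * W

        # Global group: each query attends over all memory tokens and emits
        # a single frame-level summary vector.
        g, _ = self.attn(self.global_q.expand(B, -1, -1), mem, mem)

        # Patch group: regroup tokens into non-overlapping p x p patches so
        # each patch query only attends within its own patch.
        x = mem.view(B, H // p, p, W // p, p, C).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(B * nP, p * p, C)
        q = self.patch_q.view(1, nP, 1, C).expand(B, -1, -1, -1).reshape(B * nP, 1, C)
        l, _ = self.attn(q, x, x)        # (B*nP, 1, C)
        l = l.view(B, nP, C)             # patch-grid ordering is preserved

        return torch.cat([g, l], dim=1)  # (B, n_global + nP, C)
```

With these assumed sizes, each frame's 4096 memory tokens are compressed to 64 global plus 256 patch tokens, while the patch tokens still form a 16×16 spatial grid.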
We find that in both stages, aligning the features from the image encoders of the original SAM 2 and our efficient variant benefits the performance. Besides, in the second stage we further align the feature output of the memory attention between the teacher SAM 2 and our student model, so that the memory-related modules, in addition to the image encoder, also receive supervision signals from the SAM 2 teacher. This alignment improves performance on SA-V val and test by 1.3 and 3.3, respectively. Putting it all together, we propose EdgeTAM (Track Anything Model for Edge devices), which adopts a 2D Spatial Perceiver for efficiency and knowledge distillation for accuracy. Through a comprehensive benchmark, we reveal that the latency bottleneck lies in the memory attention module. Given this latency analysis, we propose a 2D Spatial Perceiver that significantly cuts down the memory attention's computational cost with comparable performance, and that can be integrated with any SAM 2 variant. We experiment with a distillation pipeline that performs feature-wise alignment with the original SAM 2 in both the image and video segmentation stages and observe performance improvements without any extra cost during inference.
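A minimal sketch of the feature-alignment objective described above, assuming an MSE loss against a frozen teacher (the exact loss form and weighting are assumptions, not the paper's specification):

```python
import torch
import torch.nn.functional as F

def distill_loss(student_img, teacher_img,
                 student_mem=None, teacher_mem=None, w_mem: float = 1.0):
    """Feature-wise alignment with a frozen SAM 2 teacher (a sketch).
    Stage 1 passes only image-encoder features; stage 2 additionally passes
    memory-attention outputs so that the memory-related modules also receive
    supervision from the teacher."""
    loss = F.mse_loss(student_img, teacher_img.detach())
    if student_mem is not None:
        loss = loss + w_mem * F.mse_loss(student_mem, teacher_mem.detach())
    return loss
```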