
SAM 2: Segment Anything in Images and Videos

Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, Christoph Feichtenhofer
Abstract

We present Segment Anything Model 2 (SAM 2), a foundation model towards solving promptable visual segmentation in images and videos. We build a data engine, which improves model and data via user interaction, to collect the largest video segmentation dataset to date. Our model is a simple transformer architecture with streaming memory for real-time video processing. SAM 2 trained on our data provides strong performance across a wide range of tasks. In video segmentation, we observe better accuracy, using 3x fewer interactions than prior approaches. In image segmentation, our model is more accurate and 6x faster than the Segment Anything Model (SAM). We believe that our data, model, and insights will serve as a significant milestone for video segmentation and related perception tasks. We are releasing a version of our model, the dataset and an interactive demo.
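Because the abstract describes promptable video segmentation with streaming memory and notes that a version of the model is being released, the sketch below shows how such interactive prompting and propagation might look with the publicly released SAM 2 package. The config name, checkpoint path, video path, and click coordinates are placeholders, and the exact function names may differ between releases of the repository.

```python
# Minimal sketch of prompting SAM 2 on a video (assumed API of the released
# `sam2` package; paths, config name, and coordinates are placeholders).
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2_hiera_large.pt"  # placeholder checkpoint path
model_cfg = "sam2_hiera_l.yaml"                   # placeholder config name
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Initialize the streaming-memory state over a directory of video frames.
    state = predictor.init_state(video_path="./videos/example_frames")

    # Prompt the first frame with a single positive click on the target object.
    points = np.array([[320, 240]], dtype=np.float32)
    labels = np.array([1], dtype=np.int32)  # 1 = positive click
    predictor.add_new_points_or_box(
        inference_state=state, frame_idx=0, obj_id=1, points=points, labels=labels
    )

    # Propagate the prompt through the rest of the video to get per-frame masks.
    for frame_idx, object_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()  # binarize per-object masks
```

The single positive click followed by propagation mirrors the interactive setting the abstract refers to, where SAM 2 reportedly needs about 3x fewer user interactions than prior approaches to reach better accuracy.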
