Hi from the Segment Anything team! Today we’re releasing Segment Anything Model 2! It's the first unified model for real-time promptable object segmentation in images and videos! We're releasing the code, models, dataset, research paper and a demo! We're excited to see what everyone builds! https://ai.meta.com/blog/segment-anything-2/
Code, models, and data all under Apache 2.0. Impressive.
Curious how this was allowed to be more open source compared to Llama's interesting new take on "open source". Are other projects restricted in some form due to technical/legal issues and the desire is to be more like this project? Or was there an initiative to break the mold this time round?
This argument doesn't make sense to me unless you're talking about the training material. If that is not the case, then how does this argument relate to the license Meta attempts to force on downloaders of LLaMa weights?
Yeah, but it's a signal they aren't thinking of the project as a community project. They are centralizing rights towards themselves in an unequal way. Apache 2.0 without a CLA would be fine otherwise.
Grounded SAM has become an essential tool in my toolbox (for others: it lets you mask any image using only a text prompt). HUGE thank you to the team at Meta, I can't wait to try SAM2!
I've been supporting non-computational folks (i.e. scientists) in using and fine-tuning SAM for biological applications, so I'm excited to see how SAM2 performs and how the video aspects work for large image stacks of 3D objects.
Considering the instant flood of noisy issues/PRs on the repo and the limited fix/update support on SAM, are there plans/buy-in for support of SAM2 on the medium-term beyond quick fixes? Either way, thank you to the team for your work on this and the continued public releases!
So it extracts segments of images where the object is in the image, as I understand it?
A segment then is a collection of images that follow each other in time?
So if you have a video comprised of img1, img2, img3, img4
and object shows in img1 and img2 and img4
Can you catch that as the sequence img1, img2, img3, img4? And can you also catch just the object frames img1, img2, img4, but get some sort of information that there is a break between img2 and img4 - the number of images in the break, etc.?
On edit: Or am I totally off about the segment possibilities and what it means?
Or can you only catch img1 and img2 as a sequence?
Yes I did give it a glance, polite and clever HN member. It showed an object in a sequence of images extracted from video, and evidently followed the object through the sequence.
Perhaps however my interpretation of what happens here is way off, which is why I asked in an obviously incorrect and stupid way that you have pointed out to me without clarifying exactly why it was incorrect and stupid.
So anyway there is the extraction of the object I referred to, but it also seems to follow the object through a sequence of scenes?
So it seems to me that they identify the object and follow it through a contiguous sequence: img1, img2, img3, img4. Is my interpretation incorrect here?
But what I am wondering is - what happens if the object is not in img3 - like perhaps two people talking and the viewpoint shifting from the person talking to the person listening. The person talking is in img1, img2, img4. Can you get that sequence, or is the sequence just img1, img2?
It says "We extend SAM to video by considering images as a video with a single frame." which I don't know what that means, does it mean that they concatenated all the video frames into a single image and identified the object in them, in which case their example still shows contiguous images without the object ever disappearing so my question still pertains.
So anyway my conclusion is that what you said when addressing me was wrong, to quote: "what SAM does is immediately apparent when you view the home page" - because I (the "you" addressed) viewed the homepage and still wondered about some things. Obviously wrong things, which you have identified as being wrong.
And thus my question is: if what SAM does is immediately apparent when you view the home page, can you point out where my understanding has failed?
On edit: grammar fixes for last paragraph / question.
> A segment then is a collection of images that follow each other in time?
A segment is a visually distinctive... segment of an image; segmentation is basically splitting an image into objects: https://segment-anything.com. As such it has nothing to do with time or video.
Now SAM 2 is about video, so they seem to add object tracking (that is, attributing the same object to the same segment across frames).
The videos in the main article demonstrate that it can track objects in and out of frame (the one with bacteria, or the one with the boy going around the tree). However, they do acknowledge this part of the algorithm can produce incorrect results sometimes (the example with the horses).
The answer to your question is img1, img2, img4, as there is no reason to believe that it can only track objects in contiguous sequence.
I wonder if it can be used with security cameras somehow. My cameras currently alert me when they detect motion. It would be neat if this would help cameras become a little smarter. They should alert me only if someone other than a family member is detected.
The recognition logic doesn't have to always be reviewing the video, but only when motion is detected.
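Not from the model release, just a minimal sketch of that idea, assuming an OpenCV-readable camera feed: run cheap background subtraction all the time and only wake the heavy recognition/segmentation model when enough pixels are moving. run_recognition is a hypothetical placeholder for whatever SAM 2 / face-matching step you'd plug in.

    import cv2

    cap = cv2.VideoCapture(0)  # or an RTSP URL for the security camera
    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    MOTION_PIXELS = 5000  # tune per camera/scene

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        motion_mask = bg.apply(frame)
        if cv2.countNonZero(motion_mask) > MOTION_PIXELS:
            # Hypothetical hook: only now run the expensive model on this frame.
            run_recognition(frame)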
I think some cameras already try to do this, however, they are really bad at it.
Texas and Illinois. Both issued massive fines against Facebook for facial recognition, over a decade after FB first launched the feature. Segmentation is, I guess, usable to identify faces, so it may seem too close to facial recognition to launch there.
Basically the same issue the EU has with demos not launching there. You fine tech firms under vague laws often enough, and they stop doing business there.
It doesn't matter much because all the real computation happens on the GPU. But you could take their neural network and do inference using any language you want.
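As an illustration of that point (not tied to SAM specifically): any PyTorch network can be exported to ONNX and then run from whichever language has an ONNX Runtime binding (C++, C#, Java, JS, ...). A minimal sketch with a stand-in module:

    import torch
    import onnxruntime as ort

    # Stand-in module; in practice this would be the exported image/video encoder.
    model = torch.nn.Linear(4, 2).eval()
    dummy = torch.randn(1, 4)
    torch.onnx.export(model, dummy, "model.onnx", input_names=["x"], output_names=["y"])

    # Inference without touching the original Python/PyTorch code.
    sess = ort.InferenceSession("model.onnx")
    out = sess.run(["y"], {"x": dummy.numpy()})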
1. SAM 2 was trained on 256 A100 GPUs for 108 hours (SAM 1 was 68 hrs on the same cluster). Taking the upper-end $2/hr A100 price off gpulist, that means SAM 2 cost roughly $55k to train (256 × 108 h × $2/h ≈ $55k) - surprisingly cheap for adding video understanding?
2. new dataset: the new SA-V dataset is "only" 50k videos, with careful attention given to scene/object/geographical diversity, including that of the annotators. I wonder if LAION or Datacomp (AFAICT the only other real players in the open image data space) can reach this standard.
3. bootstrapped annotation: similar to SAM 1, a 3-phase approach where 16k initial annotations across 1.4k videos were then expanded with 63k + 197k more using SAM 1 + 2 assistance, with annotation time accelerating dramatically (89% faster than SAM 1 alone) by the end
4. memory attention: SAM2 is a transformer with memory across frames! special "object pointer" tokens stored in a "memory bank" FIFO queue of recent and prompted frames. Has this been explored in language models? whoa?
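Purely as a toy sketch of what a FIFO memory bank like that could look like (the real SAM 2 module stores spatial memory features plus object pointer tokens and cross-attends over them with a dedicated memory-attention block, so treat the structure below as illustration, not the actual implementation):

    from collections import deque

    class MemoryBank:
        """Toy FIFO memory of per-frame 'object pointer' features."""

        def __init__(self, max_recent=6):
            self.recent = deque(maxlen=max_recent)  # oldest unprompted frame is evicted first
            self.prompted = []                      # frames the user clicked on are always kept

        def add(self, frame_features, was_prompted=False):
            (self.prompted if was_prompted else self.recent).append(frame_features)

        def context(self):
            # What the current frame's features would cross-attend to before
            # predicting the mask for the tracked object.
            return self.prompted + list(self.recent)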
A colleague of mine has written up a quick explainer on the key features (https://encord.com/blog/segment-anything-model-2-sam-2/). The memory attention module for keeping track of objects throughout a video is very clever - one of the trickiest problems to solve, alongside occlusion. We've spent so much time trying to fix these issues in our CV projects, now it looks like Meta has done the work for us :-)
> 4. memory attention: SAM2 is a transformer with memory across frames! special "object pointer" tokens stored in a "memory bank" FIFO queue of recent and prompted frames. Has this been explored in language models? whoa?
Interesting - how do you think this could be introduced to LLMs?
I imagine that in video some special tokens are preserved in the input to the next frame - kind of like how LLMs see previous messages in chat history - but filtered down to only some category of tokens to limit the size of the context.
I believe this is a trick already borrowed from LLMs into the video space.
(I didn't read the paper, so that's speculation on my side)
I might be in the minority, but I am not that surprised by the results or the fairly modest GPU hours. I've been doing video segment tracking for a while now, using SAM for mask generation and some of the robust academic video-object segmentation models (see CUTIE: https://hkchengrex.com/Cutie/, presented at CVPR this year) for tracking the mask.
I need to read the SAM2 paper, but point 4 seems a lot like what Rex has in CUTIE. CUTIE can consistently track segments across video frames even if they get occluded or go out of frame for a while.
Seems like there's functional overlap between segmentation models and the autofocus algorithms developed by Canon and Sony for their high-end cameras.
The Canon R1, for example, will not only continually track a particular object even if partially occluded, but will also pre-focus on where it predicts the object will be when it emerges from being totally hidden. It can also be programmed by the user to focus on a particular face to the exclusion of all else.
In many jurisdictions requiring blanket acceptance of cookies to access the whole site is illegal, eg https://ico.org.uk/for-organisations/direct-marketing-and-pr... . Sites have to offer informed consent for nonessential cookies - but equally don't have to ask if the only cookies used are essential. So a popup saying 'Accept cookies?' with no other information doesn't cut it.
You don't need consent for functional cookies that are necessary for the website to work. Anything you are accepting or declining in a cookie popup shouldn't affect the user experience in any major way.
I know a lot of people who reflexively reject all cookies, and the internet indeed does keep working for them.
For those who are interested. Things that can change are:
- ads are personalized (aka more relevant/powerful to make you want things).
- The experience can become slower when accepting all cookies due to the overhead generated by extensive tracking
In essence, there should be no relevant reason for users to accept cookies, and accepting and rejecting are even supposed to be equally easy. The only problem is that companies clearly prioritize pushing users to accept cookies, because the cookies are valuable to them.
I don't. I see a few sibling comments who don't accept them either. And now I'm curious to know if there's a behavioral age gap - i.e. has the younger crowd been de facto trained to always accept them?
I tried it on the default video (white soccer ball), and it seems to really struggle with the trees in the background - maybe you could benefit from more such examples.
It looks like it’s working to me. Segmentation isn’t supposed to be used for tracking alone. If you add tracking on top, the uncertainty in the estimated mask for the white ball (which is sometimes getting confused with the wall) would be accounted for and you’d be able to track it well.
The blog post (https://ai.meta.com/blog/segment-anything-2/) mentions tracking as a use case. Similar objects is known to be challenging and they mention it in the Limitations section. In that video, I only used one frame, but in some other tests even when I prompted in several frames as recommended, it didn't really work, still.
Yeah, it's a reasonable expectation since the blog highlights it. I just figure it's worth calling out that SOTA trackers deal with object disappearance well enough that, used together with this, they would handle it. I'd venture to say that most people doing any kind of tracking aren't relying on their segmentation process alone.
I’m not sure what you are looking for a reference to exactly, but segmentation as a preprocessing step for tracking has been one of, if not the primary, most typical workflow for decades.
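To make that workflow concrete, here's a rough sketch (my own toy, not any particular SOTA tracker) of using per-frame segmentation masks as the input to tracking: reduce each mask to a centroid and greedily associate centroids across frames, keeping unmatched tracks alive so an object hidden for a few frames can be re-associated when it reappears.

    import numpy as np

    def centroid(mask):
        """Centroid (x, y) of a boolean segmentation mask."""
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()])

    def update_tracks(tracks, masks, max_dist=50.0):
        """Greedy nearest-centroid association of this frame's masks to existing
        tracks (tracks: {track_id: last_centroid}); unmatched masks start new tracks."""
        remaining = dict(tracks)
        out = {}
        next_id = max(tracks, default=-1) + 1
        for m in masks:
            c = centroid(m)
            if remaining:
                tid = min(remaining, key=lambda t: np.linalg.norm(remaining[t] - c))
                if np.linalg.norm(remaining[tid] - c) < max_dist:
                    out[tid] = c
                    remaining.pop(tid)
                    continue
            out[next_id] = c
            next_id += 1
        # Keep unmatched tracks alive at their last position so an object that
        # disappears for a few frames can be picked up again later.
        out.update(remaining)
        return out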
I feel like one could do this with a chain of LLM prompts -- extract the primary subjects or topics from this long document, then prompt again (1 at a time?) to pull out everything related to each topic from the document and collate it into one semantic chunk.
At the very least, a dataset / benchmark centered around this task feels like it would be really useful.
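A minimal sketch of that two-pass idea, assuming a generic llm(prompt) -> str helper (hypothetical; swap in whichever client/model you actually use):

    def semantic_chunks(document: str) -> dict[str, str]:
        # Pass 1: extract the primary subjects/topics.
        topics = llm(
            "List the primary topics in this document, one per line:\n\n" + document
        ).splitlines()

        # Pass 2: one prompt per topic, collating everything related to it.
        chunks = {}
        for topic in filter(None, (t.strip() for t in topics)):
            chunks[topic] = llm(
                f"Quote every passage from the document below that relates to "
                f"'{topic}', concatenated in order:\n\n" + document
            )
        return chunks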
Anyone have any home project ideas (or past work) to apply this to / inspire others?
I was initially thinking the obvious case would be some sort of system for monitoring your plant health. It could check for shrinkage / growth, colour change etc and build some sort of monitoring tool / automated watering system off that.
I used the original SAM (alongside Grounding DINO) to create an ever growing database of all the individual objects I see as I go about my daily life. It automatically parses all the photos I take on my Meta Raybans and my phone along with all my laptop screenshots. I made it for an artwork that's exhibiting in Australia, and it will likely form the basis of many artworks to come.
I haven't put it up on my website yet (and proper documentation is still coming) so unfortunately the best I can do is show you an Instagram link:
Not exactly functional, but fun. Artwork aside, it's quite interesting to see your life broken into all its little bits. It provides a new perspective (apparently, there are a lot more teacups in my life than I'd noticed).
After playing with the SAM2 demo for far too long, my immediate thought was: this would be brilliant for things like (accessible, responsive) interactive videos. I've coded up such a thing before[1] but that uses hardcoded data to track the position of the geese, and a filter to identify the swans. When I loaded that raw video into the SAM2 demo it had very little problem tracking the various birds - which would make building the interactivity on top of it very easy, I think.
Sadly my knowledge of how to make use of these models is limited to what I learned playing with some (very ancient) MediaPipe and Tensorflow models. Those models provided some WASM code to run the model in the browser and I was able to find the data from that to pipe it though to my canvas effects[2]. I'd love to get something similar working with SAM2!
Nice! Of particular interest to me is the slightly improved mIoU and 6x speedup on images [1] (though they say the speedup is mainly from the more efficient encoder, so multiple segmentations of the same image presumably would see less benefit?). It would also be nice to get a comparison to original SAM with bounding box inputs - I didn't see that in the paper though I may have missed it.
Hi from Germany. In case you were wondering, we regulated ourselves to the point where I can't even see the demo of SAM2 until some service other than Meta deploys it.
It’s more like “Meta is restricting European access to models even though they don’t have to, because they believe it’s an effective lobbying technique as they try to get EU regulations written to their preference.”
The same thing happened with the Threads app which was withheld from European users last year for no actual technical reason. Now it’s been released and nothing changed in between.
These free models and apps are bargaining chips for Meta against the EU. Once the regulatory situation settles, they’ll do what they always do and adapt to reach the largest possible global audience.
> Meta is restricting European access to models even though they don’t have to
This video segmentation model could be used by self-driving cars to detect pedestrians, or in road traffic management systems to detect vehicles, either of which would make it a Chapter III High-Risk AI System.
And if we instead say it's not specific to those high-risk applications, it is instead a general purpose model - wouldn't that make it a Chapter V General Purpose AI Model?
Obviously you and I know the "general purpose AI models" chapter was drafted with LLMs (and their successors) in mind, rather than image segmentation models - but it's the letter of the law, not the intent, that counts.
> The same thing happened with the Threads app which was withheld from European users last year for no actual technical reason. Now it’s been released and nothing changed in between.
No technical reason, but legal reasons. IIRC it was about cross-account data sharing from Instagram to Threads, which is a lot more dicey legally in the EU than in NA.
No, it really was a legal privacy thing. I worked in privacy at Meta at that time. Everybody was eager to ship it everywhere, but it wasn't worth the wrath of the EU to launch without a clear data separation between IG and threads.
Regulation in this space works exclusively in favor of big tech, not against them. Almost all of that regulation was literally written for the benefit and with aid of the big tech.
I know Illinois and Texas have biometric privacy laws; I would guess it's related to that. (I am in Illinois and cannot access the demo, so I don't know what if anything it's doing which would be in violation.)
- "We extend SAM to video", because is was previously only for images and it's capabilities are being extended to videos
- "by considering images as a video with a single frame", explaining how they support and build upon the previous image functionality
The main assumptions here are that images -> videos is a level up as opposed to being a different thing entirely, and the previous level is always supported.
"retrofit" implies that the ability to handle images was bolted on afterwards. "extend to video" implies this is a natural continuation of the image functionality, so the next part of the sentence is explaining why there is a natural continuation.
One thing it's enabled is automated annotation for segmentation, even on out-of-distribution examples. E.g. in the first 7 months of SAM, users on Roboflow used SAM-powered labeling to label over 13 million images, saving ~21 years[0] of labeling time. That doesn't include labeling from self-hosting autodistill[1] for automated annotation either.
As mentioned in another comment I use it all the time for zero-shot segmentation to do quick image collage type work (former FB-folks take their memes very seriously). It’s crazy good at doing plausible separations on parts of an image with no difference at the pixel level.
Someone who knows Creative Suite can comment on what Photoshop can do here these days (one imagines it's something), but the SAM stuff is so fast it can run in low-spec settings.
Grounded SAM[1] is extremely useful for segmenting novel classes. The model is larger and not as accurate as specialized models (e.g. any YOLO segmenter), but it's extremely useful for prototyping ideas in ComfyUI. Very excited to try SAM2.
It does detection on the backend and then feeds those bounding boxes into SAM running in the browser. This is a little slow on the first pass but allows the user to adjust the bboxes and get new segmentations in nearly real time, without putting a ton of load on the server. It saved me having to label a bunch of holds with precise masks/polygons (I labeled 10k for the detection model and that was quite enough). I might try using SAM's output to train a smaller model in the future, haven't gotten around to it.
(Site is early in development and not ready for actual users, but feel free to mess around.)
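For anyone wanting to reproduce the detector-boxes-into-SAM step server-side, here's a rough sketch using the original SAM release (SAM 2 ships its own predictor classes, and the checkpoint filename plus the image/box variables below are placeholders for whatever your detector pipeline produces):

    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)

    predictor.set_image(image)          # image: HxWx3 uint8 RGB array
    box = np.array([x0, y0, x1, y1])    # one bounding box from the hold detector
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)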
Wonder if I can use this to count my winter wood stock. Before resuscitating my mutilated Python environment, could someone please run this on a photo of stacked uneven bluegum logs to see if it can segment the pieces? OpenCV edge detection does not cut it:
I do have 2 questions:
1. Isn't processing the video frame by frame expensive?
2. In the web demo, when the leg moves fast it loses track of the shoe. Doesn't the memory part throw in some heuristics to overcome this edge case?
Impressive. Wondering if this is now fast enough out of the box to run on an iPhone. The previous SAM had some community projects such as FastSAM, MobileSAM, and EfficientSAM that tried to speed it up. I wish the README, when reporting FPS, said what hardware it was tested on.
Interesting how you can bully the model into accepting multiple people as one object, but it keeps trying to down-select to just one person (which you can then fix by adding another annotated frame in).
This is what I was getting at; I tried on my MBP and had no luck. It might just be an installer issue, but I wanted confirmation from someone with more know-how before diving in.
Quote: "Sorry Firefox users!
The Firefox browser doesn’t support the video features we’ll need to run this demo. Please try again using Chrome or Safari."