WeakSAM: Segment Anything Meets Weakly-supervised Instance-level Recognition

Weakly supervised visual recognition using inexact supervision is a critical yet challenging learning problem. It significantly reduces human labeling costs and traditionally relies on multi-instance learning and pseudo-labeling. This paper introduces WeakSAM and solves weakly-supervised object detection (WSOD) and segmentation by utilizing the pre-learned world knowledge contained in a vision foundation model, i.e., the Segment Anything Model (SAM). WeakSAM addresses two critical limitations in traditional WSOD retraining, i.e., pseudo ground truth (PGT) incompleteness and noisy PGT instances, through adaptive PGT generation and Region of Interest (RoI) drop regularization. It also addresses SAM's problems of requiring prompts and being unaware of categories, which prevent automatic object detection and segmentation. Our results indicate that WeakSAM significantly surpasses previous state-of-the-art methods on WSOD and weakly-supervised instance segmentation (WSIS) benchmarks by large margins, i.e., average improvements of 7.4% and 8.5%, respectively. The code is available at \url{https://github.com/hustvl/WeakSAM}.
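
To illustrate the prompt dependence and category unawareness the abstract refers to, the sketch below (not the authors' code; a minimal example assuming the public segment_anything package and its released vit_h checkpoint) prompts SAM with candidate boxes, such as those a weakly-supervised detector would provide, and receives back class-agnostic instance masks, so class labels must still come from the weak image-level supervision:

    # Minimal sketch: box-prompted SAM mask generation (assumed setup, not WeakSAM itself).
    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    # Load a pretrained SAM backbone; checkpoint path is an assumption.
    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    predictor = SamPredictor(sam)

    def boxes_to_masks(image, boxes):
        """Turn candidate boxes (e.g., from a WSOD model) into instance masks.

        image: HxWx3 uint8 RGB array; boxes: iterable of [x0, y0, x1, y1].
        Returns one binary (H, W) mask per box. Note the masks carry no
        category label: SAM only segments what the prompt points at.
        """
        predictor.set_image(image)
        masks = []
        for box in boxes:
            mask, score, _ = predictor.predict(
                box=np.asarray(box), multimask_output=False
            )
            masks.append(mask[0])
        return masks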