
Video Polyp Segmentation on SUN-SEG-Hard

Metrics

Dice
S-Measure
Sensitivity
mean E-measure
mean F-measure
weighted F-measure
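
The structure-aware scores (S-Measure, mean E-measure, weighted F-measure) are usually taken from the benchmark's evaluation toolkit, but Dice and Sensitivity follow directly from pixel-wise overlap between the predicted and ground-truth masks. A minimal NumPy sketch, assuming per-frame binary masks and a 0.5 threshold (the function name and toy inputs below are illustrative, not the official evaluation code):

```python
import numpy as np

def dice_and_sensitivity(pred: np.ndarray, gt: np.ndarray, thresh: float = 0.5):
    """Compute Dice and Sensitivity (recall) for a single frame.

    pred : predicted probability map with values in [0, 1]
    gt   : binary ground-truth polyp mask
    """
    pred_bin = pred >= thresh
    gt_bin = gt > 0

    tp = np.logical_and(pred_bin, gt_bin).sum()    # polyp pixels correctly predicted
    fp = np.logical_and(pred_bin, ~gt_bin).sum()   # background predicted as polyp
    fn = np.logical_and(~pred_bin, gt_bin).sum()   # polyp pixels that were missed

    dice = 2 * tp / (2 * tp + fp + fn + 1e-8)      # overlap between prediction and mask
    sensitivity = tp / (tp + fn + 1e-8)            # fraction of polyp pixels recovered
    return dice, sensitivity

# Toy usage: per-frame scores are then averaged over all frames of the test split.
pred = np.random.rand(352, 352)
gt = (np.random.rand(352, 352) > 0.9).astype(np.uint8)
print(dice_and_sensitivity(pred, gt))
```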

Results

Performance results of various models on this benchmark

| Model name | Dice | S-Measure | Sensitivity | mean E-measure | mean F-measure | weighted F-measure | Paper Title | Repository |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ACSNet | 0.708 | 0.783 | 0.618 | 0.787 | 0.684 | 0.636 | - | - |
| PNSNet | 0.675 | 0.767 | 0.579 | 0.755 | 0.656 | 0.609 | Progressively Normalized Self-Attention Network for Video Polyp Segmentation | |
| DCF | 0.317 | 0.514 | 0.364 | 0.522 | 0.303 | 0.263 | Dynamic Context-Sensitive Filtering Network for Video Salient Object Detection | - |
| AMD | 0.252 | 0.472 | 0.213 | 0.527 | 0.141 | 0.128 | The Emergence of Objectness: Learning Zero-Shot Segmentation from Videos | |
| PraNet | 0.598 | 0.717 | 0.512 | 0.735 | 0.607 | 0.544 | PraNet: Parallel Reverse Attention Network for Polyp Segmentation | |
| UNet++ | - | - | 0.467 | - | - | - | UNet++: A Nested U-Net Architecture for Medical Image Segmentation | |
| SALI | 0.822 | 0.874 | 0.830 | 0.920 | 0.822 | 0.790 | SALI: Short-term Alignment and Long-term Interaction Network for Colonoscopy Video Polyp Segmentation | |
| MAT | 0.712 | 0.785 | 0.579 | 0.755 | 0.645 | 0.578 | MATNet: Motion-Attentive Transition Network for Zero-Shot Video Object Segmentation | |
| COSNet | 0.606 | 0.670 | 0.380 | 0.627 | 0.506 | 0.443 | See More, Know More: Unsupervised Video Object Segmentation with Co-Attention Siamese Networks | |
| AutoSAM | 0.759 | 0.822 | 0.726 | 0.866 | 0.764 | 0.714 | AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder | - |
| 2/3D | 0.706 | 0.786 | 0.607 | 0.775 | 0.688 | 0.634 | - | - |
| SANet | 0.598 | 0.706 | 0.505 | 0.743 | 0.580 | 0.526 | Shallow Attention Network for Polyp Segmentation | |
| PNS+ | 0.737 | 0.797 | 0.623 | 0.793 | 0.709 | 0.653 | Video Polyp Segmentation: A Deep Learning Perspective | |
| PCSA | 0.584 | 0.682 | 0.415 | 0.660 | 0.510 | 0.443 | - | - |
| LGRNet | 0.865 | - | - | - | - | - | LGRNet: Local-Global Reciprocal Network for Uterine Fibroid Segmentation in Ultrasound Videos | - |
| YOLO-SAM 2 | 0.902 | 0.894 | 0.852 | 0.941 | 0.932 | - | Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model | |
| FSNet | 0.699 | 0.724 | 0.491 | 0.694 | 0.611 | 0.541 | Full-Duplex Strategy for Video Object Segmentation | |
| UNet | - | - | 0.429 | - | - | - | U-Net: Convolutional Networks for Biomedical Image Segmentation | |