Semi-Supervised Object Detection on COCO 2
Metrics
mAP (mean Average Precision)
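The reported mAP is presumably the standard COCO box AP averaged over IoU thresholds 0.50:0.95. As a rough illustration only, the sketch below shows how such a score is typically computed with pycocotools; the file names (instances_val2017.json, detections.json) are placeholders and do not come from any of the listed papers.

```python
# Minimal sketch of computing COCO-style mAP with pycocotools.
# File names are placeholders, not artifacts of the listed methods.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")      # ground-truth annotations
coco_dt = coco_gt.loadRes("detections.json")  # detections in COCO results format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()

# stats[0] is AP averaged over IoU 0.50:0.95, i.e. the "mAP" reported on this page.
print(f"mAP: {100 * evaluator.stats[0]:.2f}")
```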
Results
Performance of various models on this benchmark.
Comparison Table
Model Name | mAP |
---|---|
efficient-teacher-semi-supervised-object | 28.7 |
unbiased-teacher-for-semi-supervised-object-1 | 24.30 ± 0.07 |
mixteacher-mining-promising-labels-with-mixed | 27.88 |
pseco-pseudo-labeling-and-consistency | 27.77 |
omni-detr-omni-supervised-object-detection | 23.2 |
detreg-unsupervised-pretraining-with-region | 18.69 ± 0.2 |
ambiguity-resistant-semi-supervised-learning | 29.08 |
mixed-pseudo-labels-for-semi-supervised | 34.7 |
mixteacher-mining-promising-labels-with-mixed | 29.11 |
semi-supervised-object-detection-via-virtual-1 | 27.70 |
consistent-teacher-provides-better-1 | 30.7 |
rethinking-pseudo-labels-for-semi-supervised | 23.34 ± 0.18 |
mum-mix-image-tiles-and-unmix-feature-tiles | 24.84 |
unbiased-teacher-v2-semi-supervised-object-1 | 28.37 ± 0.03 |
semi-supervised-object-detection-with-1 | 28.69 ± 0.17 |
instant-teaching-an-end-to-end-semi | 22.45 |
consistency-based-semi-supervised-learning | 13.93 |
adaptive-self-training-for-object-detection | 24.85 |
a-simple-semi-supervised-learning-framework | 18.25 ± 0.25 |
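For quick comparisons, rows in this table can also be parsed programmatically. The snippet below is a small illustrative sketch, not part of any listed method: ROWS copies a few entries from the table above, and parse_row is a hypothetical helper that drops the reported ± deviation and ranks models by mean mAP.

```python
import re

# A few rows copied from the comparison table above; "± x" is the reported
# standard deviation across runs and is ignored for ranking purposes.
ROWS = [
    "mixed-pseudo-labels-for-semi-supervised | 34.7 |",
    "consistent-teacher-provides-better-1 | 30.7 |",
    "unbiased-teacher-for-semi-supervised-object-1 | 24.30 ± 0.07 |",
    "detreg-unsupervised-pretraining-with-region | 18.69 ± 0.2 |",
]

def parse_row(row: str) -> tuple[str, float]:
    """Split a 'model | mAP |' row and keep only the mean mAP value."""
    model, score = [cell.strip() for cell in row.strip(" |").split("|")]
    mean = float(re.split(r"±", score)[0])
    return model, mean

# Sort models by mean mAP, highest first.
ranked = sorted((parse_row(r) for r in ROWS), key=lambda x: x[1], reverse=True)
for model, m_ap in ranked:
    print(f"{model}: {m_ap:.2f} mAP")
```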