HyperAI

Self-Supervised Image Classification on ImageNet

Metrics

Number of Params
Top 1 Accuracy
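
For reference, the sketch below shows how these two metrics are conventionally obtained for a PyTorch classifier: the parameter count is summed over the model, and top-1 accuracy is the fraction of validation images whose highest-scoring class matches the ground-truth label. The model and dataloader here are generic placeholders, not part of this benchmark's own evaluation code.

```python
import torch

def count_params(model: torch.nn.Module) -> int:
    # "Number of Params": total parameter count of the model (usually reported in millions).
    return sum(p.numel() for p in model.parameters())

@torch.no_grad()
def top1_accuracy(model: torch.nn.Module, loader) -> float:
    # "Top 1 Accuracy": share of samples whose argmax prediction equals the label.
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        logits = model(images)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total
```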

Results

Performance results of the various models on this benchmark

Comparison Table
| Model Name | Number of Params | Top 1 Accuracy |
| --- | --- | --- |
| dinov2-learning-robust-visual-features | 1100M | 88.9% |
| efficient-self-supervised-vision-transformers | 87M | 83.9% |
| ibot-image-bert-pre-training-with-online | 307M | 84.8% |
| architecture-agnostic-masked-image-modeling | - | 80.4% |
| momentum-contrast-for-unsupervised-visual | - | 77.0% |
| ibot-image-bert-pre-training-with-online | 307M | 86.6% |
| masked-image-residual-learning-for-scaling-1 | 341M | 86.2% |
| simmim-a-simple-framework-for-masked-image | 658M | 87.1% |
| divide-and-contrast-self-supervised-learning | - | 78.2% |
| architecture-agnostic-masked-image-modeling | - | 84.5% |
| improving-visual-representation-learning | 307M | 88.6% |
| architecture-agnostic-masked-image-modeling | - | 82.4% |
| dinov2-learning-robust-visual-features | 1100M | 88.5% |
| designing-bert-for-convolutional-networks | 60M | 82.7% |
| designing-bert-for-convolutional-networks | 89M | 84.8% |
| ibot-image-bert-pre-training-with-online | 85M | 84.0% |
| beit-bert-pre-training-of-image-transformers | 307M | 86.3% |
| masked-feature-prediction-for-self-supervised | 307M | 85.7% |
| architecture-agnostic-masked-image-modeling | - | 84.2% |
| unsupervised-learning-of-visual-features-by | 193M | 82.0% |
| masked-autoencoders-are-scalable-vision | - | 86.9% |
| designing-bert-for-convolutional-networks | 198M | 86.0% |
| designing-bert-for-convolutional-networks | 26M | 80.6% |
| emerging-properties-in-self-supervised-vision | 85M | 82.8% |
| towards-sustainable-self-supervised-learning | - | 86.5% |
| big-self-supervised-models-are-strong-semi | 795M | 83.1% |
| masked-autoencoders-are-scalable-vision | 632M | 87.8% |
| designing-bert-for-convolutional-networks | 65M | 83.1% |
| architecture-agnostic-masked-image-modeling | - | 78.9% |
| augmenting-sub-model-to-improve-main-model | 87M | 83.9% |
| simmim-a-simple-framework-for-masked-image | 88M | 84.0% |
| momentum-contrast-for-unsupervised-visual | - | 77.3% |
| unifying-architectures-tasks-and-modalities | 473M | 85.6% |
| architecture-agnostic-masked-image-modeling | - | 78.8% |
| simmim-a-simple-framework-for-masked-image | 85M | 83.8% |
| an-empirical-study-of-training-self | 86M | 83.2% |
| exploring-target-representations-for-masked | 632M | 88.0% |
| context-autoencoder-for-self-supervised | 307M | 86.3% |
| mc-beit-multi-choice-discretization-for-image | 86M | 84.1% |
| designing-bert-for-convolutional-networks | 50M | 84.1% |
| a-simple-framework-for-contrastive-learning | - | 77.2% |
| bootstrapped-masked-autoencoders-for-vision | 307M | 85.9% |
| self-supervised-pretraining-of-visual | 1.3B | 84.2% |
| leveraging-large-scale-uncurated-data-for | 138M | 74.9% |
| architecture-agnostic-masked-image-modeling | - | 80.5% |
| unsupervised-learning-of-visual-features-by | 182M | 77.8% |
| simmim-a-simple-framework-for-masked-image | 197M | 85.4% |
| augmenting-sub-model-to-improve-main-model | 304M | 86.1% |
| designing-bert-for-convolutional-networks | 44M | 82.2% |
| peco-perceptual-codebook-for-bert-pre | 632M | 88.3% |
| mugs-a-multi-granular-self-supervised | 21M | 82.6% |
| student-collaboration-improves-self | - | 83.2% |
| an-empirical-study-of-training-self | 304M | 84.1% |
| self-supervised-pretraining-of-visual | 693M | 83.8% |
| ibot-image-bert-pre-training-with-online | 307M | 87.8% |
| designing-bert-for-convolutional-networks | 198M | 85.4% |
| improving-visual-representation-learning | 307M | 88.1% |
| masked-image-residual-learning-for-scaling-1 | 96M | 84.8% |
| architecture-agnostic-masked-image-modeling | - | 82.2% |
| ibot-image-bert-pre-training-with-online | 85M | 84.4% |
| augmenting-sub-model-to-improve-main-model | 632M | 87.2% |
| vision-models-are-more-robust-and-fair-when | 10000M | 85.8% |
| mugs-a-multi-granular-self-supervised | 85M | 84.3% |
| mugs-a-multi-granular-self-supervised | 307M | 85.2% |
| beit-bert-pre-training-of-image-transformers | 86M | 84.6% |
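
Many of the checkpoints behind these rows are publicly released. As one illustrative sketch tied to the top-scoring dinov2 entry, the snippet below assumes the facebookresearch/dinov2 repository's torch.hub entry point dinov2_vitg14 and simply cross-checks the reported parameter count; it is not the benchmark's own evaluation pipeline.

```python
import torch

# Assumption: facebookresearch/dinov2 exposes a torch.hub entry point named
# "dinov2_vitg14" for the ViT-g/14 backbone; loading it downloads pretrained weights.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitg14")

n_params = sum(p.numel() for p in backbone.parameters())
print(f"ViT-g/14 backbone parameters: {n_params / 1e6:.0f}M")  # roughly the 1100M listed above
```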