Image Classification On Places205
Evaluation Metric: Top-1 Accuracy
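Top-1 accuracy is the fraction of test images for which the model's single highest-scoring class matches the ground-truth label (Places205 covers 205 scene categories). As a reference, here is a minimal sketch of the computation in NumPy; the `logits` and `labels` arrays are hypothetical toy inputs, not data from this benchmark:

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class equals the label.

    logits: (N, num_classes) array of model scores.
    labels: (N,) array of ground-truth class indices.
    """
    preds = logits.argmax(axis=1)          # predicted class per sample
    return float((preds == labels).mean()) # share of correct predictions

# Toy check: 2 of 3 predictions match the label -> 66.7% top-1 accuracy.
logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.4, 0.6]])
labels = np.array([1, 0, 0])
print(f"Top-1 accuracy: {top1_accuracy(logits, labels):.1%}")
```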
Evaluation Results
Performance of each model on this benchmark, sorted by top-1 accuracy:
| Model | Top-1 Accuracy (%) | Paper Title |
| --- | --- | --- |
| InternImage-H | 71.7 | InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions |
| MixMIM-L | 69.3 | MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers |
| SEER (RegNet10B - finetuned - 384px) | 69.0 | Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision |
| MixMIM-B | 68.3 | MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers |
| MAE (ViT-H, 448) | 66.8 | Masked Autoencoders Are Scalable Vision Learners |
| SEER | 66.0 | Self-supervised Pretraining of Visual Features in the Wild |
| SAMix (ResNet-50 Supervised) | 64.3 | Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup |
| AutoMix (ResNet-50 Supervised) | 64.1 | AutoMix: Unveiling the Power of Mixup for Stronger Classifiers |
| RegNetY-128GF (Supervised) | 62.7 | Self-supervised Pretraining of Visual Features in the Wild |
| SwAV | 56.7 | Unsupervised Learning of Visual Features by Contrasting Cluster Assignments |
| Barlow Twins (ResNet-50) | 54.1 | Barlow Twins: Self-Supervised Learning via Redundancy Reduction |
| BYOL | 54.0 | Bootstrap your own latent: A new approach to self-supervised Learning |
| SimCLR | 53.3 | A Simple Framework for Contrastive Learning of Visual Representations |
| ResNet-50 (Supervised) | 53.2 | Unsupervised Learning of Visual Features by Contrasting Cluster Assignments |
| MoCo v2 | 52.9 | Improved Baselines with Momentum Contrastive Learning |