HyperAI
Home › SOTA › Image Classification
Image Classification On Omnibenchmark
Metric
Average Top-1 Accuracy
Results
Performance of each model on this benchmark
| Model Name | Average Top-1 Accuracy | Paper Title |
| --- | --- | --- |
| NOAH-ViTB/16 | 47.6 | Neural Prompt Search |
| SwinTransformer | 46.4 | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows |
| Bamboo-R50 | 45.4 | Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy |
| Adapter-ViTB/16 | 44.5 | Parameter-Efficient Transfer Learning for NLP |
| CLIP-RN50 | 42.1 | Learning Transferable Visual Models From Natural Language Supervision |
| IG-1B | 40.4 | Billion-scale semi-supervised learning for image classification |
| BiT-M | 40.4 | Big Transfer (BiT): General Visual Representation Learning |
| DINO | 38.9 | Emerging Properties in Self-Supervised Vision Transformers |
| SwAV | 38.3 | Unsupervised Learning of Visual Features by Contrasting Cluster Assignments |
| ResNet-101 | 37.4 | Deep Residual Learning for Image Recognition |
| MEAL-V2 | 36.6 | MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks |
| MoPro-V2 | 36.1 | MoPro: Webly Supervised Learning with Momentum Prototypes |
| EfficientNetB4 | 35.8 | EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks |
| MoCoV2 | 34.8 | Momentum Contrast for Unsupervised Visual Representation Learning |
| ResNet-50 | 34.3 | Deep Residual Learning for Image Recognition |
| InceptionV4 | 32.3 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning |
| MLP-Mixer | 32.2 | MLP-Mixer: An all-MLP Architecture for Vision |
| Manifold | 31.6 | Manifold Mixup: Better Representations by Interpolating Hidden States |
| CutMix | 31.1 | CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features |
| ReLabel | 30.8 | Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels |