Image Classification on ObjectNet
Metrics
Top-1 Accuracy
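The leaderboard reports Top-1 Accuracy: the percentage of test images for which the model's single highest-scoring class matches the ground-truth label. As a quick illustration of the metric, here is a minimal NumPy sketch (not the leaderboard's actual evaluation code):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class matches the label."""
    preds = logits.argmax(axis=1)            # index of the top-scoring class per row
    return float((preds == labels).mean())   # fraction correct in [0, 1]

# Toy example: 3 samples, 4 classes; two of the three predictions are correct.
logits = np.array([[0.1, 0.7, 0.1, 0.1],
                   [0.3, 0.2, 0.4, 0.1],
                   [0.25, 0.25, 0.25, 0.25]])
labels = np.array([1, 2, 3])
print(f"Top-1 accuracy: {100 * top1_accuracy(logits, labels):.2f}%")  # 66.67%
```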
Results
Performance results of various models on this benchmark.
| Model Name | Top-1 Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| ResNet-50 + MixUp (rescaled) | 28.37 | On Mixup Regularization | |
| MoCo-v2 (BG_Swaps) | 20.8 | Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations | - |
| AR-B (Opt Relevance) | 47.1 | Optimizing Relevance Maps of Vision Transformers Improves Robustness | |
| RegViT (RandAug) | 29.3 | Pyramid Adversarial Training Improves ViT Performance | |
| ViT B/16 (Bamboo) | 53.9 | Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy | |
| CLIP (CC12M pretrain) | 15.24 | Robust Cross-Modal Representation Learning with Progressive Self-Distillation | - |
| MLP-Mixer + Pixel | 24.75 | Pyramid Adversarial Training Improves ViT Performance | |
| ALIGN | 72.2 | Combined Scaling for Zero-shot Transfer Learning | - |
| RegNetY 128GF (Platt) | 64.3 | Revisiting Weakly Supervised Pre-Training of Visual Perception Models | |
| ViT H/14 (Platt) | 60 | Revisiting Weakly Supervised Pre-Training of Visual Perception Models | |
| NASNet-A | 35.77 | ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models | - |
| SWAG (ViT H/14) | 69.5 | Revisiting Weakly Supervised Pre-Training of Visual Perception Models | |
| SwAV (reverse linear probing) | 17.71 | Measuring the Interpretability of Unsupervised Representations via Quantized Reversed Probing | - |
| BYOL (BG_RM) | 23.9 | Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations | - |
| Inception-v4 | 32.24 | ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models | - |
| AlexNet | 6.78 | ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models | - |
| Discrete ViT | 29.95 | Pyramid Adversarial Training Improves ViT Performance | |
| SwAV (BG_RM) | 21.9 | Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations | - |
| MAWS (ViT-H) | 72.6 | The effectiveness of MAE pre-pretraining for billion-scale pretraining | |
| OBoW (reverse linear probing) | 12.23 | Measuring the Interpretability of Unsupervised Representations via Quantized Reversed Probing | - |
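For context on how such numbers are typically produced, below is a minimal evaluation sketch assuming a locally downloaded copy of ObjectNet arranged as one folder per class (the path `objectnet/images` is a hypothetical placeholder) and an ImageNet-pretrained ResNet-50 from torchvision. This is not the harness used by any paper above; in particular, ObjectNet overlaps ImageNet on only a subset of classes, so a real evaluation must remap labels (and crop the 1-pixel red border ObjectNet adds to every image), which this sketch glosses over.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing; official ObjectNet tooling also
# crops the images' 1-pixel red border first (omitted here for brevity).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical local layout: one folder per class under objectnet/images.
dataset = datasets.ImageFolder("objectnet/images", transform=preprocess)
loader = DataLoader(dataset, batch_size=64, num_workers=4)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        # NOTE (assumption): a real ObjectNet evaluation maps ImageNet class
        # indices to ObjectNet labels on the overlapping classes; this sketch
        # assumes such a mapping has already been applied to `labels`.
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"Top-1 accuracy: {100 * correct / total:.2f}%")
```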
Showing 20 of 106 entries.