HyperAI
Image Classification On Flowers 102
Evaluation metric: Accuracy

Evaluation results: the performance of each model on this benchmark.

| Model | Accuracy (%) | Paper Title |
| --- | --- | --- |
| CCT-14/7x2 | 99.76 | Escaping the Big Data Paradigm with Compact Transformers |
| ViT-L/16 (Background) | 99.75 | Reduction of Class Activation Uncertainty with Background Information |
| CvT-W24 | 99.72 | CvT: Introducing Convolutions to Vision Transformers |
| Bamboo (ViT-B/16) | 99.7 | Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy |
| Model 36 | 99.68 | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale |
| EffNet-L2 (SAM) | 99.65 | Sharpness-Aware Minimization for Efficiently Improving Generalization |
| ALIGN | 99.65 | Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision |
| BiT-L (ResNet) | 99.63 | Big Transfer (BiT): General Visual Representation Learning |
| ConvMLP-S | 99.5 | ConvMLP: Hierarchical Convolutional MLPs for Vision |
| ConvMLP-L | 99.5 | ConvMLP: Hierarchical Convolutional MLPs for Vision |
| ResNet-152x4-AGC (ImageNet-21K) | 99.49 | Effect of Pre-Training Scale on Intra- and Inter-Domain Full and Few-Shot Transfer Learning for Natural and Medical X-Ray Chest Images |
| Wide-ResNet-101 (Spinal FC) | 99.30 | SpinalNet: Deep Neural Network with Gradual Input |
| BiT-M (ResNet) | 99.30 | Big Transfer (BiT): General Visual Representation Learning |
| CaiT-M-36 U 224 | 99.1 | - |
| Grafit (RegNet-8GF) | 99.1 | Grafit: Learning fine-grained image representations with coarse labels |
| TResNet-L | 99.1 | TResNet: High Performance GPU-Dedicated Architecture |
| DAT | 98.9 | Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization |
| GFNet-H-B | 98.8 | Global Filter Networks for Image Classification |
| EfficientNet-B7 | 98.8 | EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks |
| DeiT-B | 98.8 | Training data-efficient image transformers & distillation through attention |
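The ranking metric on this benchmark is plain top-1 accuracy: the percentage of test images whose predicted class matches the ground-truth label. A minimal sketch of that computation, using hypothetical prediction and label lists (not tied to any model in the table):

```python
def top1_accuracy(predictions, labels):
    """Return top-1 accuracy in percent: the share of positions where
    the predicted class equals the ground-truth class."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have the same length")
    correct = sum(p == t for p, t in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# Hypothetical toy example: 3 of 4 predictions are correct.
print(top1_accuracy([0, 1, 2, 2], [0, 1, 2, 3]))  # → 75.0
```

On Flowers-102 the labels range over the dataset's 102 flower categories, so a 99.76 score means roughly 24 misclassified images per 10,000 test samples.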
(51 entries in total; only the first page of results is shown here.)