Image Classification on CIFAR-10

Metric: Percentage correct (top-1 accuracy on the CIFAR-10 test set)

Results: performance of various models on this benchmark, listed by model name, percentage correct, and paper title. A sketch of the frozen-backbone linear-evaluation protocol appears after the table.

| Model | Percentage correct | Paper |
| --- | --- | --- |
| DINOv2 (ViT-g/14, frozen model, linear eval) | 99.5 | DINOv2: Learning Robust Visual Features without Supervision |
| ViT-H/14 | 99.5 | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale |
| µ2Net (ViT-L/16) | 99.49 | An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems |
| ViT-L/16 | 99.42 | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale |
| CaiT-M-36 U 224 | 99.4 | - |
| CvT-W24 | 99.39 | CvT: Introducing Convolutions to Vision Transformers |
| BiT-L (ResNet) | 99.37 | Big Transfer (BiT): General Visual Representation Learning |
| RDNet-L (224 res, IN-1K pretrained) | 99.31 | DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs |
| RDNet-B (224 res, IN-1K pretrained) | 99.31 | DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs |
| ViT-B (attn fine-tune) | 99.3 | Three things everyone should know about Vision Transformers |
| Heinsen Routing + BEiT-large 16 224 | 99.2 | An Algorithm for Routing Vectors in Sequences |
| ViT-B/16 (PUGD) | 99.13 | Perturbated Gradients Updating within Unit Space for Deep Learning |
| Astroformer | 99.12 | Astroformer: More Data Might not be all you need for Classification |
| CeiT-S (384 finetune resolution) | 99.1 | Incorporating Convolution Designs into Visual Transformers |
| TNT-B | 99.1 | Transformer in Transformer |
| DeiT-B | 99.1 | Training data-efficient image transformers & distillation through attention |
| EfficientNetV2-L | 99.1 | EfficientNetV2: Smaller Models and Faster Training |
| AutoFormer-S \| 384 | 99.1 | AutoFormer: Searching Transformers for Visual Recognition |
| VIT-L/16 (Spinal FC, Background) | 99.05 | Reduction of Class Activation Uncertainty with Background Information |
| LaNet | 99.03 | Sample-Efficient Neural Architecture Search by Learning Action Space for Monte Carlo Tree Search |
Top 20 of 264 entries shown.
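The "Percentage correct" metric is top-1 accuracy on the 10,000-image CIFAR-10 test split. The top entry also names a common evaluation protocol, linear evaluation on a frozen backbone, where only a linear classifier is trained on top of fixed features. The sketch below illustrates that protocol under stated assumptions: torchvision's ImageNet-pretrained ResNet-18 stands in for the much larger DINOv2 ViT-g/14 backbone, and the resize, optimizer, and epoch count are illustrative choices, not the setup of any listed paper.

```python
# Minimal sketch: frozen-backbone linear evaluation on CIFAR-10, reporting
# "Percentage correct" (top-1 accuracy on the 10,000-image test split).
# ResNet-18 is an assumed stand-in for the leaderboard's DINOv2 ViT-g/14.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# CIFAR-10 images are 32x32; ImageNet-pretrained backbones expect ~224x224.
tfm = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.CIFAR10("data", train=True, download=True, transform=tfm)
test_set = datasets.CIFAR10("data", train=False, download=True, transform=tfm)
train_loader = DataLoader(train_set, batch_size=256, shuffle=True, num_workers=2)
test_loader = DataLoader(test_set, batch_size=256, shuffle=False, num_workers=2)

# Frozen backbone: capture the feature width, drop the classification head,
# and disable gradients so its weights never change.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
feat_dim = backbone.fc.in_features            # 512 for ResNet-18
backbone.fc = nn.Identity()
backbone.requires_grad_(False)
backbone.eval().to(device)

# Only this linear classifier is trained ("linear eval").
linear_head = nn.Linear(feat_dim, 10).to(device)
opt = torch.optim.AdamW(linear_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                        # a few epochs suffice for a probe
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        with torch.no_grad():                 # backbone stays frozen
            feats = backbone(images)
        loss = loss_fn(linear_head(feats), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

# "Percentage correct": share of test images whose argmax prediction
# matches the ground-truth label, expressed as a percentage.
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        preds = linear_head(backbone(images)).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"Percentage correct: {100.0 * correct / total:.2f}")
```

Freezing the backbone isolates representation quality: the resulting score reflects the learned features themselves rather than what end-to-end fine-tuning can squeeze out of them, which is why the DINOv2 entry is labeled "frozen model, linear eval".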