Efficient ViTs on ImageNet-1K with LV-ViT-S
Evaluation Metrics
GFLOPs (giga floating-point operations per forward pass; lower is better)
Top-1 Accuracy (%; higher is better)
Evaluation Results
Performance of each model on this benchmark.

| Model Name | GFLOPs | Top-1 Accuracy (%) | Paper Title | Repository |
|------------|--------|--------------------|-------------|------------|
| MCTF ($r=16$) | 3.6 | 82.3 | Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | |
| MCTF ($r=8$) | 4.9 | 83.5 | Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | |
| DynamicViT (70%) | 4.6 | 83.0 | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | |
| Base (LV-ViT-S) | 6.6 | 83.3 | All Tokens Matter: Token Labeling for Training Better Vision Transformers | |
| eTPS | 3.8 | 82.5 | Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | |
| SPViT | 4.3 | 83.1 | SPViT: Enabling Faster Vision Transformers via Soft Token Pruning | |
| DynamicViT (80%) | 5.1 | 83.2 | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | |
| EViT (50%) | 3.9 | 82.5 | Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations | |
| DPS-LV-ViT-S | 4.5 | 82.9 | Patch Slimming for Efficient Vision Transformers | - |
| PS-LV-ViT-S | 4.7 | 82.4 | Patch Slimming for Efficient Vision Transformers | - |
| DiffRate | 3.9 | 82.6 | DiffRate: Differentiable Compression Rate for Efficient Vision Transformers | |
| MCTF ($r=12$) | 4.2 | 83.4 | Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | |
| AS-LV-S (60%) | 3.9 | 82.6 | Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | - |
| EViT (70%) | 4.7 | 83.0 | Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations | |
| AS-LV-S (70%) | 4.6 | 83.1 | Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | - |
| dTPS | 3.8 | 82.6 | Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | |
| BAT | 4.7 | 83.1 | Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers | |
| PPT | 4.6 | 83.1 | PPT: Token Pruning and Pooling for Efficient Vision Transformers | |
| DynamicViT (90%) | 5.8 | 83.3 | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | |
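Since the leaderboard trades compute (GFLOPs) against Top-1 accuracy, entries are easiest to compare via their Pareto frontier: a model is undominated only if no other model is both cheaper and at least as accurate. For example, MCTF ($r=16$) cuts compute from the 6.6 GFLOPs of the LV-ViT-S base to 3.6 GFLOPs (a roughly 45% reduction) for a 1.0-point drop in top-1 accuracy. Below is a minimal Python sketch that computes the frontier from the numbers above; the `dominated` helper and the data layout are illustrative, not part of any benchmark tooling.

```python
# Minimal sketch: find the Pareto-optimal leaderboard entries, i.e. those
# for which no other model uses no more compute while being at least as
# accurate. Data is copied from the table above.

entries = [
    ("MCTF (r=16)", 3.6, 82.3),
    ("MCTF (r=8)", 4.9, 83.5),
    ("DynamicViT (70%)", 4.6, 83.0),
    ("Base (LV-ViT-S)", 6.6, 83.3),
    ("eTPS", 3.8, 82.5),
    ("SPViT", 4.3, 83.1),
    ("DynamicViT (80%)", 5.1, 83.2),
    ("EViT (50%)", 3.9, 82.5),
    ("DPS-LV-ViT-S", 4.5, 82.9),
    ("PS-LV-ViT-S", 4.7, 82.4),
    ("DiffRate", 3.9, 82.6),
    ("MCTF (r=12)", 4.2, 83.4),
    ("AS-LV-S (60%)", 3.9, 82.6),
    ("EViT (70%)", 4.7, 83.0),
    ("AS-LV-S (70%)", 4.6, 83.1),
    ("dTPS", 3.8, 82.6),
    ("BAT", 4.7, 83.1),
    ("PPT", 4.6, 83.1),
    ("DynamicViT (90%)", 5.8, 83.3),
]

def dominated(a, b):
    """True if entry `a` is dominated by entry `b`: `b` uses no more
    GFLOPs, is no less accurate, and is strictly better on one axis."""
    _, flops_a, top1_a = a
    _, flops_b, top1_b = b
    return (flops_b <= flops_a and top1_b >= top1_a
            and (flops_b < flops_a or top1_b > top1_a))

# Keep every entry that no other entry dominates.
frontier = [e for e in entries
            if not any(dominated(e, other) for other in entries if other is not e)]

# Report the frontier ordered from cheapest to most expensive.
for name, gflops, top1 in sorted(frontier, key=lambda e: e[1]):
    print(f"{name}: {gflops} GFLOPs, {top1}% top-1")
```

On the numbers above, this prints MCTF ($r=16$), dTPS, MCTF ($r=12$), and MCTF ($r=8$) as the undominated entries.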