HyperAI
Zero-Shot Action Recognition on UCF101
Metrics
Top-1 Accuracy
Results
Performance results of various models on this benchmark
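Top-1 accuracy here counts a test video as correct when the model's single highest-scoring class matches the ground-truth label. A minimal sketch of how this is typically computed for CLIP-style zero-shot models, where class names are embedded as text and each video is assigned the class with the highest cosine similarity (function name and toy data below are illustrative, not from any listed paper):

```python
import numpy as np

def top1_accuracy(video_emb, text_emb, labels):
    """Top-1 accuracy for CLIP-style zero-shot recognition:
    each video is assigned the class whose text embedding has
    the highest cosine similarity with the video embedding."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    preds = (v @ t.T).argmax(axis=1)       # best-matching class per video
    return float((preds == labels).mean())

# Toy data: 4 videos, 3 classes, 8-dim embeddings (purely illustrative).
rng = np.random.default_rng(0)
text_emb = rng.normal(size=(3, 8))
labels = np.array([0, 1, 2, 1])
# Videos lie close to their class's text embedding, plus small noise.
video_emb = text_emb[labels] + 0.01 * rng.normal(size=(4, 8))
print(top1_accuracy(video_emb, text_emb, labels))  # → 1.0
```

The leaderboard values below are this quantity expressed as a percentage over the UCF101 test classes held out from training.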
| Model | Top-1 Accuracy | Paper Title | Repository |
|---|---|---|---|
| SVE | 10.9 | Semantic Embedding Space for Zero-Shot Action Recognition | - |
| ResT | 58.7 | Cross-modal Representation Learning for Zero-shot Action Recognition | - |
| SJE (Attribute) | 12.0 | Evaluation of Output Embeddings for Fine-Grained Image Classification | |
| MAXI | 78.2 | MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge | |
| TC-CLIP | 85.4 | Leveraging Temporal Contextualization for Video Action Recognition | |
| BIKE | 86.6 | Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models | |
| MOV (ViT-B/16) | 82.6 | Multimodal Open-Vocabulary Video Classification via Pre-Trained Vision and Language Models | - |
| O2A | 30.3 | Objects2action: Classifying and localizing actions without any video example | - |
| X-CLIP | 72.0 | Expanding Language-Image Pretrained Models for General Video Recognition | |
| ZSECOC | 15.1 | Zero-Shot Action Recognition With Error-Correcting Output Codes | - |
| Text4Vis | 85.8 | Revisiting Classifier: Transferring Vision-Language Models for Video Recognition | |
| VicTR (ViT-B/16) | 72.4 | VicTR: Video-conditioned Text Representations for Activity Recognition | - |
| LoCATe-GAT | 76.0 | LoCATe-GAT: Modeling Multi-Scale Local Context and Action Relationships for Zero-Shot Action Recognition | |
| ESZSL | 15.0 | An embarrassingly simple approach to zero-shot learning | |
| HAA | 14.9 | - | - |
| VideoCoCa | 86.6 | VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners | - |
| AURL | 58 | Alignment-Uniformity aware Representation Learning for Zero-shot Video Classification | - |
| CLASTER | 53.9 | CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition | - |
| EVA-CLIP-E/14+ | 83.1 | EVA-CLIP: Improved Training Techniques for CLIP at Scale | |
| TS-GCN | 34.2 | I Know the Relationships: Zero-Shot Action Recognition via Two-Stream Graph Convolutional Networks and Knowledge Graphs | |