HyperAI

Action Classification On Moments In Time

Metrics

Top 1 Accuracy
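Top-1 accuracy counts a prediction as correct only when the model's single highest-scoring class matches the ground-truth label. A minimal sketch of the metric is below; the function and variable names are illustrative and not taken from any benchmark toolkit.

```python
def top1_accuracy(scores, labels):
    """Compute top-1 accuracy.

    scores: list of per-class score lists, one list per sample
    labels: list of ground-truth class indices, one per sample
    """
    correct = 0
    for per_class_scores, label in zip(scores, labels):
        # The prediction is the index of the highest-scoring class.
        predicted = max(range(len(per_class_scores)),
                        key=per_class_scores.__getitem__)
        if predicted == label:
            correct += 1
    return correct / len(labels)
```

For example, with two samples where only the first argmax matches its label, the function returns 0.5; leaderboard entries report this value as a percentage.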

Results

Performance results of various models on this benchmark

| Model name | Top 1 Accuracy (%) | Paper Title |
| --- | --- | --- |
| MoViNet-A5 | 39.1 | MoViNets: Mobile Video Networks for Efficient Video Recognition |
| MoViNet-A4 | 37.9 | MoViNets: Mobile Video Networks for Efficient Video Recognition |
| UMT-L (ViT-L/16) | 48.7 | Unmasked Teacher: Towards Training-Efficient Video Foundation Models |
| I3D | 29.51 | Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset |
| TSN-2Stream | - | Temporal Segment Networks for Action Recognition in Videos |
| SRTG r3d-34 | 28.55 | Learn to cycle: Time-consistent feature discovery for action recognition |
| MoViNet-A0 | 27.5 | MoViNets: Mobile Video Networks for Efficient Video Recognition |
| UniFormerV2-L | 47.8 | UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer |
| TRN-Multiscale | 28.27 | Temporal Relational Reasoning in Videos |
| SRTG r3d-101 | 33.56 | Learn to cycle: Time-consistent feature discovery for action recognition |
| AssembleNet | 34.27 | AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures |
| SRTG r3d-50 | 30.72 | Learn to cycle: Time-consistent feature discovery for action recognition |
| CoVeR (JFT-3B) | 46.1 | Co-training Transformer with Videos and Images Improves Action Recognition |
| EvaNet | 31.8 | Evolving Space-Time Neural Architectures for Videos |
| InternVideo2-1B | 50.9 | InternVideo2: Scaling Foundation Models for Multimodal Video Understanding |
| CoVeR (JFT-300M) | 45.0 | Co-training Transformer with Videos and Images Improves Action Recognition |
| CoST (ResNet-101, 32 frames) | 32.4 | Collaborative Spatiotemporal Feature Learning for Video Action Recognition |
| OmniVec2 | 53.1 | OmniVec2 - A Novel Transformer based Network for Large Scale Multimodal and Multitask Learning |
| MBT (AV) | 37.3 | Attention Bottlenecks for Multimodal Fusion |
| ViViT-L/16x2 | - | ViViT: A Video Vision Transformer |