Zero-Shot Video Question Answering on MSVD-QA
Metrics: Accuracy, Confidence Score

Results
Performance results of various models on this benchmark (a sketch of how the two metrics are typically aggregated follows the table).
| Model name | Accuracy (%) | Confidence Score | Paper Title |
|---|---|---|---|
| Flash-VStream | 80.3 | 3.9 | Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams |
| Tarsier (34B) | 80.3 | 4.2 | Tarsier: Recipes for Training and Evaluating Large Video Description Models |
| LinVT-Qwen2-VL (7B) | 80.2 | 4.4 | LinVT: Empower Your Image-level Large Language Model to Understand Videos |
| VILA1.5-40B | 80.1 | - | VILA: On Pre-training for Visual Language Models |
| SlowFast-LLaVA-34B | 79.9 | 4.1 | SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models |
| PLLaVA (34B) | 79.9 | 4.2 | PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning |
| IG-VLM-34B | 79.6 | 4.1 | An Image Grid Can Be Worth a Video: Zero-shot Video Question Answering Using a VLM |
| TS-LLaVA-34B | 79.4 | 4.1 | TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models |
| PPLLaVA-7B | 77.1 | 4.0 | PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance |
| Elysium | 75.8 | 3.7 | Elysium: Exploring Object-level Perception in Videos via MLLM |
| MovieChat | 75.2 | 2.9 | MovieChat: From Dense Token to Sparse Memory for Long Video Understanding |
| ST-LLM | 74.6 | 3.9 | ST-LLM: Large Language Models Are Effective Temporal Learners |
| MiniGPT4-video-7B | 73.92 | - | MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens |
| Video-LaVIT | 73.2 | 3.9 | Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization |
| VideoGPT+ | 72.4 | 3.6 | VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding |
| LLaVA-Mini | 70.9 | 4.0 | LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token |
| Video-LLaVA-7B | 70.7 | 3.9 | Video-LLaVA: Learning United Visual Representation by Alignment Before Projection |
| VideoChat2 | 70.0 | 3.9 | MVBench: A Comprehensive Multi-modal Video Understanding Benchmark |
| LLaMA-VID-13B (2 Token) | 70.0 | 3.7 | LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models |
| LLaMA-VID-7B (2 Token) | 69.7 | 3.7 | LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models |
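On zero-shot video-QA leaderboards such as this one, the Accuracy and score columns typically come from a GPT-assisted judging protocol (popularized by Video-ChatGPT): a judge model marks each predicted answer correct or incorrect and grades it on a 0-5 scale, so Accuracy is the percentage of correct verdicts and the score column is the mean grade. The sketch below shows how such per-question judgments would be aggregated; the `Judgment` structure and its field names are illustrative assumptions, not HyperAI's actual evaluation code.

```python
from typing import TypedDict

class Judgment(TypedDict):
    # Hypothetical per-question output of a judge model (assumed schema).
    correct: bool   # judge's yes/no verdict on the predicted answer
    score: float    # judge's 0-5 quality grade for the same answer

def aggregate(judgments: list[Judgment]) -> tuple[float, float]:
    """Return (accuracy in percent, mean score on the 0-5 scale)."""
    n = len(judgments)
    accuracy = 100.0 * sum(j["correct"] for j in judgments) / n
    mean_score = sum(j["score"] for j in judgments) / n
    return accuracy, mean_score

if __name__ == "__main__":
    # Toy example with three judged answers.
    demo: list[Judgment] = [
        {"correct": True, "score": 4.0},
        {"correct": True, "score": 5.0},
        {"correct": False, "score": 2.0},
    ]
    acc, score = aggregate(demo)
    print(f"Accuracy: {acc:.1f}%  Score: {score:.1f}")
    # -> Accuracy: 66.7%  Score: 3.7
```

Under this reading, a row such as Flash-VStream (80.3 / 3.9) would mean the judge accepted 80.3% of its answers and graded them 3.9 on average.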