Video-based Generative Performance Benchmarking (Temporal Understanding)
Video Based Generative Performance 5
Metrics: gpt-score
Results: Performance results of various models on this benchmark.
| Model Name | gpt-score | Paper Title | Repository |
| --- | --- | --- | --- |
| PPLLaVA-7B | 3.21 | PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | - |
| Video Chat | 1.94 | VideoChat: Chat-Centric Video Understanding | - |
| VideoChat2 | 2.66 | MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | - |
| SlowFast-LLaVA-34B | 2.77 | SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | - |
| LLaMA Adapter | 1.98 | LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | - |
| PLLaVA-34B | 2.67 | PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | - |
| BT-Adapter (zero-shot) | 2.13 | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | - |
| Video LLaMA | 1.82 | Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | - |
| TS-LLaVA-34B | 2.77 | TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | - |
| MovieChat | 2.24 | MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | - |
| VTimeLLM | 2.49 | VTimeLLM: Empower LLM to Grasp Video Moments | - |
| VideoGPT+ | 2.83 | VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | - |
| Chat-UniVi | 2.39 | Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | - |
| BT-Adapter | 2.34 | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | - |
| ST-LLM | 2.93 | ST-LLM: Large Language Models Are Effective Temporal Learners | - |
| MiniGPT4-video-7B | 2.65 | MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | - |
| Video-ChatGPT | 1.98 | Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | - |
| VideoChat2_HD_mistral | 2.65 | MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | - |
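The gpt-score column follows the GPT-assisted evaluation protocol introduced with the Video-ChatGPT benchmark: for each test question, a judge language model compares the evaluated model's generated answer against the ground-truth answer and assigns a temporal-understanding rating on a 1-5 scale, and the per-question ratings are averaged into the benchmark score. The sketch below only illustrates that general pattern; the judge prompt wording, the judge model name, and the `score_answer` / `gpt_score` helpers are assumptions for illustration, not the exact evaluation code behind this leaderboard.

```python
# Minimal sketch of GPT-assisted scoring in the style of Video-ChatGPT-like
# generative benchmarks. Prompt wording and judge model are assumptions;
# only the overall pattern (rate each answer 1-5, average the ratings)
# reflects the published protocol.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def score_answer(question: str, reference: str, prediction: str) -> int:
    """Ask a judge model to rate temporal understanding on a 1-5 scale."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative judge model, not leaderboard-verified
        messages=[
            {
                "role": "system",
                "content": (
                    "You evaluate the temporal accuracy of video QA answers. "
                    'Reply with JSON only: {"score": <integer 1-5>}.'
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Question: {question}\n"
                    f"Correct answer: {reference}\n"
                    f"Predicted answer: {prediction}"
                ),
            },
        ],
    )
    # Assumes the judge returns well-formed JSON, as instructed above.
    return int(json.loads(response.choices[0].message.content)["score"])


def gpt_score(examples: list[dict]) -> float:
    """Average per-question judge ratings into a single benchmark score."""
    ratings = [
        score_answer(e["question"], e["answer"], e["prediction"])
        for e in examples
    ]
    return sum(ratings) / len(ratings)
```

Under this scheme a value such as 2.93 for ST-LLM means its answers averaged just under 3 out of 5 in the judge's temporal-understanding ratings.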