Video Based Generative Performance 4

Evaluation Metric

gpt-score
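The gpt-score metric, introduced with Video-ChatGPT's quantitative evaluation framework, asks a GPT judge to rate a model's answer against a reference answer on a 1–5 scale, and the benchmark reports the mean score over all test questions. A minimal sketch of the prompt construction, score parsing, and averaging (the judge call itself is mocked with canned replies; function names and the exact prompt wording are illustrative, not taken from any benchmark codebase):

```python
import json

def build_judge_prompt(question: str, reference: str, prediction: str) -> str:
    """Compose a judge prompt asking for a 1-5 correctness score as JSON."""
    return (
        "Evaluate the predicted answer against the correct answer.\n"
        f"Question: {question}\n"
        f"Correct Answer: {reference}\n"
        f"Predicted Answer: {prediction}\n"
        'Reply with JSON like {"score": 4}.'
    )

def parse_score(judge_reply: str) -> float:
    """Extract the numeric score from the judge's JSON reply."""
    return float(json.loads(judge_reply)["score"])

def average_gpt_score(scores: list[float]) -> float:
    """The leaderboard value is the mean judge score over the test set."""
    return sum(scores) / len(scores)

# Mocked judge replies standing in for real GPT API responses:
replies = ['{"score": 3}', '{"score": 4}', '{"score": 2}']
scores = [parse_score(r) for r in replies]
print(average_gpt_score(scores))  # mean of 3, 4, 2 -> 3.0
```

In the real pipeline, `build_judge_prompt` output would be sent to a GPT model via an API call, and `parse_score` would be applied to its response; only the aggregation logic is benchmark-independent.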

Evaluation Results

Performance of each model on this benchmark:

| Model | gpt-score | Paper Title | Repository |
|---|---|---|---|
| Video-ChatGPT | 2.52 | Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | - |
| Video LLaMA | 2.18 | Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | - |
| MiniGPT4-video-7B | 3.02 | MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | - |
| Video Chat | 2.50 | VideoChat: Chat-Centric Video Understanding | - |
| LLaMA Adapter | 2.32 | LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | - |
| ST-LLM | 3.05 | ST-LLM: Large Language Models Are Effective Temporal Learners | - |
| SlowFast-LLaVA-34B | 2.96 | SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | - |
| MovieChat | 2.93 | MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | - |
| Chat-UniVi | 2.91 | Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | - |
| VTimeLLM | 3.10 | VTimeLLM: Empower LLM to Grasp Video Moments | - |
| TS-LLaVA-34B | 3.03 | TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | - |
| BT-Adapter (zero-shot) | 2.46 | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | - |
| VideoChat2 | 2.88 | MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | - |
| VideoGPT+ | 3.18 | VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | - |
| BT-Adapter | 2.69 | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | - |
| PLLaVA-34B | 3.20 | PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | - |
| VideoChat2_HD_mistral | 2.86 | MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | - |
| PPLLaVA-7B | 3.56 | PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | - |