
Long Context Transfer from Language to Vision

Peiyuan Zhang, Kaichen Zhang, Bo Li, Guangtao Zeng, Jingkang Yang, Yuanhan Zhang, Ziyue Wang, Haoran Tan, Chunyuan Li, Ziwei Liu
Abstract

Video sequences offer valuable temporal information, but existing large multimodal models (LMMs) fall short in understanding extremely long videos. Many works address this by reducing the number of visual tokens using visual resamplers. Alternatively, in this paper, we approach this problem from the perspective of the language model. By simply extrapolating the context length of the language backbone, we enable LMMs to comprehend orders of magnitude more visual tokens without any video training. We call this phenomenon long context transfer and carefully ablate its properties. To effectively measure LMMs' ability to generalize to long contexts in the vision modality, we develop V-NIAH (Visual Needle-In-A-Haystack), a purely synthetic long vision benchmark inspired by the language model's NIAH test. Our proposed Long Video Assistant (LongVA) can process 2000 frames or over 200K visual tokens without additional complexities. With its extended context length, LongVA achieves state-of-the-art performance on Video-MME among 7B-scale models by densely sampling more input frames. Our work is open-sourced at https://github.com/EvolvingLMMs-Lab/LongVA.
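To make the central idea concrete, the following is a minimal sketch of how one might extend a language backbone's context window before feeding it long sequences of visual tokens. The abstract does not specify LongVA's exact extrapolation method, so the backbone model name, the RoPE scaling type, and the scaling factor below are illustrative assumptions, not the paper's actual recipe.

# Sketch: extending a language model's context window via RoPE scaling
# (Hugging Face Transformers). All specific values here are assumptions.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder backbone, not LongVA's actual one

config = AutoConfig.from_pretrained(model_id)
# Scale rotary position embeddings so the model accepts sequences far longer
# than its original training context (e.g., 4K -> 32K positions).
config.rope_scaling = {"type": "linear", "factor": 8.0}
config.max_position_embeddings = config.max_position_embeddings * 8

model = AutoModelForCausalLM.from_pretrained(model_id, config=config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Visual tokens from densely sampled video frames could then be interleaved
# with text tokens and passed through this extended-context backbone,
# which is the "long context transfer" setting the abstract describes.

The point of the sketch is only to show where the context extension happens: on the language side, before any video-specific training, which is why the authors can scale to hundreds of thousands of visual tokens without architectural changes.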
