Dynamic Scene Understanding from Vision-Language Representations

Images depicting complex, dynamic scenes are challenging to parse automatically, requiring both high-level comprehension of the overall situation and fine-grained identification of participating entities and their interactions. Current approaches use distinct methods tailored to sub-tasks such as Situation Recognition and detection of Human-Human and Human-Object Interactions. However, recent advances in image understanding have often leveraged web-scale vision-language (V&L) representations to obviate task-specific engineering. In this work, we propose a framework for dynamic scene understanding tasks that leverages knowledge from modern, frozen V&L representations. By framing these tasks in a generic manner - as predicting and parsing structured text, or by directly concatenating representations to the input of existing models - we achieve state-of-the-art results while using a minimal number of trainable parameters relative to existing approaches. Moreover, our analysis of the dynamic scene knowledge encoded by these representations shows that recent, more powerful representations effectively encode dynamic scene semantics, making this approach newly possible.
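
To make the "directly concatenating representations" idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes a frozen CLIP image encoder from the Hugging Face transformers library and a hypothetical small task head whose input is the existing model's features concatenated with the frozen V&L features; only the head is trainable. All module names, dimensions, and the choice of CLIP backbone are illustrative assumptions.

```python
# Sketch only: frozen V&L image features concatenated to an existing model's input.
# The CLIP backbone and the toy task head are assumptions, not the paper's method.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip.eval()
for p in clip.parameters():          # keep the V&L representation frozen
    p.requires_grad = False

class TaskHeadWithVL(nn.Module):
    """Hypothetical task head whose input is augmented with frozen V&L features."""
    def __init__(self, task_dim: int, vl_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(task_dim + vl_dim, num_classes)

    def forward(self, task_feats: torch.Tensor, vl_feats: torch.Tensor) -> torch.Tensor:
        # Concatenate task-model features with frozen V&L features, then classify.
        return self.classifier(torch.cat([task_feats, vl_feats], dim=-1))

head = TaskHeadWithVL(task_dim=256, vl_dim=clip.config.projection_dim, num_classes=10)

image = Image.new("RGB", (224, 224))                 # stand-in for a real scene image
pixels = processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    vl_feats = clip.get_image_features(pixel_values=pixels)  # frozen V&L features

task_feats = torch.randn(1, 256)     # placeholder features from an existing model
logits = head(task_feats, vl_feats)  # only the small head contributes trainable parameters
```

Because the V&L encoder stays frozen, the trainable parameter count is limited to the lightweight head, which mirrors the parameter-efficiency claim above; the structured-text variant mentioned in the abstract would instead condition a text decoder on such features.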