Decoding Open-Ended Information Seeking Goals from Eye Movements in Reading

Cfir Avraham Hadar, Omer Shubi, Yoav Meiri, Yevgeni Berzak
Publication date: 5/13/2025
Abstract

When reading, we often have specific information that interests us in a text. For example, you might be reading this paper because you are curious about LLMs for eye movements in reading, the experimental design, or perhaps you only care about the question "but does it work?". More broadly, in daily life, people approach texts with any number of text-specific goals that guide their reading behavior. In this work, we ask, for the first time, whether open-ended reading goals can be automatically decoded from eye movements in reading. To address this question, we introduce goal classification and goal reconstruction tasks and evaluation frameworks, and use large-scale eye tracking for reading data in English with hundreds of text-specific information seeking tasks. We develop and compare several discriminative and generative multimodal LLMs that combine eye movements and text for goal classification and goal reconstruction. Our experiments show considerable success on both tasks, suggesting that LLMs can extract valuable information about the readers' text-specific goals from eye movements.
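To make the goal classification setup concrete, here is a deliberately simplified sketch — not the paper's method, which uses multimodal LLMs — of the core intuition that readers dwell longer on goal-relevant words. It assumes hypothetical inputs: per-word total fixation durations aligned to the text, and a small set of candidate goal questions.

```python
import string


def classify_goal(words, fixation_durations, candidate_goals):
    """Toy goal classifier: pick the candidate goal whose words
    received the most total fixation time in the text.

    words              -- list of words in the passage, in reading order
    fixation_durations -- per-word total fixation durations (ms), aligned to words
    candidate_goals    -- list of candidate goal questions (strings)
    """
    scores = []
    for goal in candidate_goals:
        # Normalize the goal question into a bag of lowercase words.
        goal_words = {
            w.strip(string.punctuation).lower() for w in goal.split()
        }
        # Score = total fixation time spent on words that appear in the goal.
        score = sum(
            dur
            for word, dur in zip(words, fixation_durations)
            if word.strip(string.punctuation).lower() in goal_words
        )
        scores.append(score)
    # Return the highest-scoring candidate goal.
    return candidate_goals[max(range(len(scores)), key=scores.__getitem__)]


# Hypothetical example: long fixations on "eye", "movements", "recorded".
words = ["Participants", "read", "texts", "while", "eye",
         "movements", "were", "recorded"]
durations = [120, 90, 300, 80, 450, 400, 70, 210]
goals = ["Which eye movements were recorded?", "Who funded the study?"]
print(classify_goal(words, durations, goals))
```

The actual models in the paper learn this mapping end to end from eye movement features and text rather than relying on lexical overlap, but the sketch conveys why fixation patterns carry a signal about the reader's goal.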