ChuLo: Chunk-Level Key Information Representation for Long Document Processing

Transformer-based models have achieved remarkable success in various Natural Language Processing (NLP) tasks, yet their ability to handle long documents is constrained by computational limitations. Traditional approaches, such as truncating inputs, sparse self-attention, and chunking, attempt to mitigate these issues, but they often lead to information loss and hinder the model's ability to capture long-range dependencies. In this paper, we introduce ChuLo, a novel chunk representation method for long document understanding that addresses these limitations. ChuLo groups input tokens using unsupervised keyphrase extraction, emphasizing semantically important keyphrase-based chunks to retain core document content while reducing input length. This approach minimizes information loss and improves the efficiency of Transformer-based models. Preserving all tokens is especially important in long document token classification tasks, where fine-grained annotations depend on the full sequence context and would otherwise be lost. We evaluate our method on multiple long document classification and token classification tasks, demonstrating its effectiveness through comprehensive qualitative and quantitative analysis. Our implementation is open-sourced at https://github.com/adlnlp/Chulo.
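To make the chunk-level idea concrete, the following is a minimal, illustrative Python sketch of keyphrase-weighted chunking. It is not the paper's pipeline: the frequency-based keyphrase scorer (simple_keyphrases), the fixed-size chunking, and the additive chunk weights (chunk_with_key_weights) are hypothetical stand-ins for the unsupervised keyphrase extractor and chunk representation that the abstract describes.

from collections import Counter
import re

def simple_keyphrases(text, top_k=10):
    # Naive unsupervised keyphrase proxy: rank unigrams by frequency,
    # skipping very short and stop-like words. A stand-in for a real
    # unsupervised extractor; the actual method is defined in the paper.
    tokens = re.findall(r"[a-z]+", text.lower())
    stop = {"the", "a", "an", "of", "to", "and", "in", "is", "that", "for"}
    counts = Counter(t for t in tokens if len(t) > 3 and t not in stop)
    return {word for word, _ in counts.most_common(top_k)}

def chunk_with_key_weights(tokens, chunk_size, keyphrases):
    # Group tokens into fixed-size chunks and give each chunk a weight
    # that grows with the number of keyphrase tokens it contains, so
    # semantically important chunks are emphasized downstream.
    chunks = [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]
    weights = [1.0 + sum(t.lower() in keyphrases for t in chunk) for chunk in chunks]
    return chunks, weights

doc = "Transformer models struggle with long documents because attention cost grows with length."
tokens = doc.split()
keys = simple_keyphrases(doc)
chunks, weights = chunk_with_key_weights(tokens, chunk_size=4, keyphrases=keys)
for chunk, weight in zip(chunks, weights):
    print(weight, chunk)

The weights here only illustrate the principle of emphasizing keyphrase-bearing chunks while shortening the effective input; in ChuLo itself, chunk representations built this way would be fed to a Transformer backbone as described in the paper.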