Sparsifying Transformer Models with Trainable Representation Pooling
Anonymous
Abstract
We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during training, thus focusing on the task-specific parts of an input. The reduction from quadratic to sublinear time and memory complexity is achieved by a robust trainable top-$k$ operator. Our experiments on a challenging long-document summarization task show that even our simple baseline performs comparably to the current SOTA, and that with trainable pooling we retain its top quality while being $1.8\times$ faster during training, $4.5\times$ faster during inference, and up to $13\times$ more computationally efficient in the decoder.
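To make the core idea concrete, the sketch below shows one plausible way to pool only the top-$k$ token representations with a learned scorer. This is a minimal PyTorch illustration under my own assumptions, not the paper's actual operator; the names (`TopKPooler`, `score_proj`, `k`) are hypothetical, and the gradient path here is the simple score-scaling trick rather than whatever relaxation the authors use to make top-$k$ selection robustly trainable.

```python
# Hypothetical sketch of trainable top-k representation pooling (PyTorch).
# Not the paper's exact operator; names and the differentiability trick are illustrative.
import torch
import torch.nn as nn


class TopKPooler(nn.Module):
    """Score each token and keep only the k highest-scoring representations.

    Multiplying the kept representations by their gated scores preserves a
    gradient path to the scorer, so the selection can be trained end-to-end.
    """

    def __init__(self, d_model: int, k: int):
        super().__init__()
        self.k = k
        self.score_proj = nn.Linear(d_model, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model)
        scores = self.score_proj(hidden).squeeze(-1)            # (batch, seq_len)
        k = min(self.k, hidden.size(1))
        top_scores, top_idx = scores.topk(k, dim=1)             # (batch, k)
        idx = top_idx.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
        pooled = hidden.gather(1, idx)                          # (batch, k, d_model)
        # Scale by sigmoid-gated scores so the scorer receives gradients.
        return pooled * torch.sigmoid(top_scores).unsqueeze(-1)


# Usage: shrink a 4096-token encoder output to 512 tokens before the decoder cross-attends to it.
pooler = TopKPooler(d_model=768, k=512)
x = torch.randn(2, 4096, 768)
print(pooler(x).shape)  # torch.Size([2, 512, 768])
```

The decoder then attends over only the pooled $k$ tokens instead of the full sequence, which is where the claimed decoder-side efficiency gains would come from.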