Streaming Sequence-to-Sequence Learning with Delayed Streams Modeling
Neil Zeghidour Eugene Kharitonov Manu Orsini Václav Volhejn Gabriel de Marmiesse Edouard Grave Patrick Pérez Laurent Mazaré Alexandre Défossez
Abstract
We introduce Delayed Streams Modeling (DSM), a flexible formulation for streaming, multimodal sequence-to-sequence learning. Sequence-to-sequence generation is often cast in an offline manner, where the model consumes the complete input sequence before generating the first output timestep. Alternatively, streaming sequence-to-sequence models rely on learning a policy for choosing when to advance on the input stream or write to the output stream. DSM instead models already time-aligned streams with a decoder-only language model. By moving the alignment to a pre-processing step, and introducing appropriate delays between streams, DSM provides streaming inference of arbitrary output sequences, from any input combination, making it applicable to many sequence-to-sequence problems. In particular, given text and audio streams, automatic speech recognition (ASR) corresponds to the text stream being delayed, while the opposite gives a text-to-speech (TTS) model. We perform extensive experiments on these two major sequence-to-sequence tasks, showing that DSM provides state-of-the-art performance and latency while supporting arbitrarily long sequences, and is even competitive with offline baselines. Code, samples and demos are available at this https URL
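The core idea of the abstract — combining two time-aligned streams, with one shifted by a fixed delay so a decoder-only model sees context from the other stream before emitting each aligned token — can be sketched minimally as follows. This is an illustrative toy, not the paper's code; the `PAD` token and `delay_streams` helper are assumptions for the example.

```python
# Illustrative sketch (not the paper's implementation) of delayed streams:
# two time-aligned token streams are zipped into per-step frames, with one
# stream shifted by a fixed delay so the model observes the other one first.
PAD = "<pad>"  # hypothetical padding token filling the delayed positions

def delay_streams(stream_a, stream_b, delay):
    """Shift stream_b `delay` steps later than stream_a and zip into frames.

    With delay > 0, at each step the model conditions on stream_a tokens
    from `delay` steps ahead of the stream_b token it must emit.
    """
    shifted_b = [PAD] * delay + list(stream_b)
    padded_a = list(stream_a) + [PAD] * delay
    return list(zip(padded_a, shifted_b))

# Delaying the text stream relative to audio corresponds to an ASR-like
# setup; swapping the roles (delaying audio) would correspond to TTS.
frames = delay_streams(["a1", "a2", "a3"], ["t1", "t2", "t3"], delay=2)
# each frame pairs the current audio token with the text token from 2 steps back
```

In this toy, the zipped frames form the single multiplexed sequence a decoder-only language model would be trained on, which is what lets inference proceed in a streaming fashion over arbitrarily long inputs.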