Efficient Speech Language Modeling via Energy Distance in Continuous Latent Space

Zhengrui Ma, Yang Feng, Chenze Shao, Fandong Meng, Jie Zhou, Min Zhang
Publication date: 5/21/2025
Abstract

We introduce SLED, an alternative approach to speech language modeling that encodes speech waveforms into sequences of continuous latent representations and models them autoregressively using an energy distance objective. The energy distance offers an analytical measure of the distributional gap by contrasting simulated and target samples, enabling efficient training to capture the underlying continuous autoregressive distribution. By bypassing reliance on residual vector quantization, SLED avoids discretization errors and eliminates the need for the complicated hierarchical architectures common in existing speech language models. It simplifies the overall modeling pipeline while preserving the richness of speech information and maintaining inference efficiency. Empirical results demonstrate that SLED achieves strong performance in both zero-shot and streaming speech synthesis, showing its potential for broader applications in general-purpose speech language models.
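To make the core training signal concrete, the sketch below estimates the (squared) energy distance between two sets of samples, the quantity the abstract describes as contrasting simulated and target samples. This is a generic sample-based estimator of the standard energy distance, not SLED's actual loss or code; the function name and sample shapes are illustrative assumptions.

```python
import numpy as np

def energy_distance(x, y):
    """Sample estimate of the squared energy distance between P and Q.

    x: (n, d) samples drawn from P (e.g. model outputs)
    y: (m, d) samples drawn from Q (e.g. target latents)
    Returns 2*E||X-Y|| - E||X-X'|| - E||Y-Y'||, which is zero
    iff the two distributions coincide.
    """
    def mean_pairwise(a, b):
        # Mean Euclidean distance over all pairs (a_i, b_j).
        diffs = a[:, None, :] - b[None, :, :]
        return np.linalg.norm(diffs, axis=-1).mean()

    return 2 * mean_pairwise(x, y) - mean_pairwise(x, x) - mean_pairwise(y, y)

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, size=(512, 8))        # "target" samples
q_same = rng.normal(0.0, 1.0, size=(512, 8))   # same distribution
q_shift = rng.normal(2.0, 1.0, size=(512, 8))  # shifted distribution

# Matching distributions score near zero; mismatched ones score higher.
print(energy_distance(p, q_same))
print(energy_distance(p, q_shift))
```

Because the estimator is a simple average of pairwise norms, it is differentiable in the simulated samples, which is what makes it usable as a training objective for a continuous autoregressive model.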