Training-Efficient
Text-to-Music Generation
with State-Space Modeling

Graduate Institute of Communication Engineering, National Taiwan University

Abstract

Recent advances in text-to-music generation (TTM) have yielded high-quality results, but often at the cost of extensive compute and reliance on large proprietary internal data. To improve the affordability and openness of TTM training, an open-source generative model backbone that is more training- and data-efficient is needed. In this paper, we constrain the number of trainable parameters in the generative model to match that of the MusicGen-small benchmark (about 300M parameters) and replace its Transformer backbone with the emerging class of state-space models (SSMs). Specifically, we explore different SSM variants for sequence modeling and compare a single-stage SSM-based design with a decomposable two-stage SSM/diffusion hybrid design. All proposed models are trained from scratch on a purely public dataset comprising 457 hours of CC-licensed music, ensuring full openness. Our experimental findings are three-fold. First, we show that SSMs exhibit superior training efficiency compared to their Transformer counterpart. Second, despite using only 9% of the FLOPs and 2% of the training data of the MusicGen-small benchmark, our model achieves competitive performance in both objective metrics and subjective listening tests based on MusicCaps captions. Finally, our scaling-down experiment demonstrates that SSMs maintain competitive performance relative to the Transformer baseline under the same training budget (measured in iterations), even when the model size is reduced to a quarter. To facilitate the democratization of TTM research, the processed captions, model checkpoints, and source code are made publicly available via the project page: https://lonian6.github.io/ssmttm/.
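To make the backbone swap concrete, below is a minimal, illustrative sketch (in PyTorch) of a diagonal state-space layer used in place of self-attention inside a decoder-style language model over discrete audio tokens. It is not the released implementation or the SiMBA architecture; all class and variable names (SimpleSSMLayer, SSMBackboneLM, d_state, etc.) are hypothetical, and the sequential scan is written for clarity rather than speed.

```python
import torch
import torch.nn as nn


class SimpleSSMLayer(nn.Module):
    """Diagonal linear state-space layer with a causal (recurrent) scan."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        # Learnable diagonal state dynamics (kept stable via -softplus),
        # plus per-channel input (B), output (C), and skip (D) parameters.
        self.log_a = nn.Parameter(torch.randn(d_model, d_state) * 0.1)
        self.B = nn.Parameter(torch.randn(d_model, d_state) * 0.1)
        self.C = nn.Parameter(torch.randn(d_model, d_state) * 0.1)
        self.D = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, _ = x.shape
        a_bar = torch.exp(-torch.nn.functional.softplus(self.log_a))  # decay in (0, 1)
        h = x.new_zeros(batch, *self.B.shape)  # hidden state (batch, d_model, d_state)
        ys = []
        for t in range(seq_len):
            u_t = x[:, t, :]                            # current input (batch, d_model)
            h = a_bar * h + self.B * u_t.unsqueeze(-1)  # state update
            ys.append((h * self.C).sum(-1) + self.D * u_t)
        return torch.stack(ys, dim=1)                   # (batch, seq_len, d_model)


class SSMBackboneLM(nn.Module):
    """Decoder-style LM over discrete audio tokens with residual SSM blocks."""

    def __init__(self, vocab_size: int, d_model: int = 512, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.LayerNorm(d_model), SimpleSSMLayer(d_model))
            for _ in range(n_layers)
        )
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)          # (batch, seq_len, d_model)
        for block in self.blocks:
            x = x + block(x)            # residual SSM token mixing
        return self.head(x)             # next-token logits
```

Because the scan is causal and the per-step state has fixed size, such a layer supports autoregressive token generation without the quadratic attention cost, which is the efficiency property the paper exploits.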

Qualitative Results

We provide a set of audio samples with their mel spectrograms for qualitative comparison. All samples are generated from the same text prompt:
"This is the recording of a jazz reggae concert. There is a saxophone lead playing a solo. There is a keyboard and an electric guitar playing the main tune with the backing of a bass guitar. In the rhythmic background, there is an acoustic reggae drum beat. The atmosphere is groovy and chill. This piece could be playing in the background at a beach. It could also be included in the soundtrack of a summer/vacation/tropical themed movie."

[Audio samples and mel spectrograms are provided for the following configurations: SiMBA (100k), SiMBA (50k), Transformer (400k), Transformer (100k), and their two-stage variants SiMBA (100k) / Diffusion, SiMBA (50k) / Diffusion, Transformer (400k) / Diffusion, and Transformer (100k) / Diffusion.]

BibTeX

Coming soon...