Training-Efficient
Text-to-Music Generation
with State-Space Modeling

Graduate Institute of Communication Engineering, National Taiwan University
Under Review

Abstract

Recent advances in text-to-music generation (TTM) have yielded high-quality results, but often at the cost of extensive compute and the use of large proprietary internal data. To improve the affordability and openness of TTM training, an open-source generative model backbone that is more training- and data-efficient is needed. In this paper, we constrain the number of trainable parameters in the generative model to match that of the MusicGen-small benchmark (about 300M parameters), and replace its Transformer backbone with the emerging class of state-space models (SSMs). Specifically, we explore different SSM variants for sequence modeling, and compare a single-stage SSM-based design with a decomposable two-stage SSM/diffusion hybrid design. All proposed models are trained from scratch on a purely public dataset comprising 457 hours of CC-licensed music, ensuring full openness. Our experimental findings are three-fold. First, we show that SSMs exhibit superior training efficiency compared to the Transformer counterpart. Second, despite using only 9% of the FLOPs and 2% of the training data size compared to the MusicGen-small benchmark, our model achieves competitive performance in both objective metrics and subjective listening tests based on MusicCaps captions. Finally, our scaling-down experiment demonstrates that SSMs can maintain competitive performance relative to the Transformer baseline at the same training budget (measured in iterations), even when the model size is reduced by a factor of four. To facilitate the democratization of TTM research, the processed captions, model checkpoints, and source code are made publicly available on GitHub via the project page: https://lonian6.github.io/ssmttm/.

Audio Samples Demo

This section presents audio generated by Prefix SiMBA (100k)/Diffusion and Cross Transformer (400k)/Diffusion. As described in the paper, we model only the first DAC codebook (κ=1) when training the LM, then use the pre-trained Discodiff for second-stage refinement. All text prompts are randomly selected from MusicCaps, and the audio samples shown here are randomly selected as well.

Text Prompt SiMBA (100k) Transformer (400k) MusicGen-Small
This is an instrumental backing track of a jazz music piece. There is no singer in this version of the piece. The piano is playing the chords in the minor key while a bass guitar can be heard playing a walking bass line. The rhythmic background consists of acoustic drums playing a jazz swing beat. The atmosphere is delicate. This piece could be playing in the background at a coffee shop. It could be used in the soundtrack of a romantic movie.
The song is an instrumental. The song is medium tempo with a guitar playing solo, steady drum rhythm, groovy bass line and steady bass line. The song is funky and groovy in nature. The song is an ad jingle.
The low quality recording features an electro house song that consists of punchy "4 on the floor" kick pattern, claps, funky electric guitar melody, groovy synth keys, groovy bass that glues everything together and shimmering shakers. It sounds a bit muddy, but also energetic, fun and exciting - like something you would hear in clubs during 70s and 80s a lot.
The low quality recording features an instrumental cover that consists exotic steel pan melody, punchy kick hits layered with claps, groovy piano chord progression and groovy bass. It is in mono and it sounds a bit harsh, but energetic, tropical and exotic.
This music is instrumental. The tempo is medium fast with a melodious keyboard harmony, steady drumming, groovy bass, synthesiser arrangements , electronically articulated sounds and tambourine beats . The melody is harmonious, pleasant, uncomplicated and well layered. This music is Synth Pop.
The song is instrumental. The song is medium tempo with traditional percussion instruments , bongos, piano accompaniment and groovy bass line. The song is improvisational and energetic. The song is jazz fusion and has poor audio quality.
This music is an Electronic dance instrumental. The tempo is medium fast with punchy drum beats and groovy electronic arrangements.The music is buoyant, energetic, electric, pulsating, youthful and vigorous. The thumpy, rhythmic bass and drumming gives it a groovy dance beat.
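To illustrate the decomposable two-stage design described above, the following is a minimal sketch of the pipeline: stage 1 autoregressively predicts the first DAC codebook (κ=1) from a text prompt, and stage 2 refines those coarse tokens into the full codebook grid with a pre-trained diffusion model. All function names, signatures, and the toy sampling logic here are illustrative placeholders, not the actual model code.

```python
# Hypothetical sketch of the two-stage TTM pipeline (not the real implementation).
# Stage 1: an SSM language model over the first DAC codebook only (kappa = 1).
# Stage 2: a pre-trained diffusion refiner (Discodiff in the paper) that expands
# the coarse tokens into the remaining codebooks. Names are placeholders.

def stage1_generate_coarse_tokens(prompt: str, num_steps: int = 8) -> list[int]:
    """Stand-in for the SSM LM sampling tokens from the first DAC codebook."""
    # A real model would sample token-by-token conditioned on the text prompt;
    # here we derive a deterministic toy sequence from the prompt characters.
    return [(ord(c) + step) % 1024
            for step, c in enumerate(prompt[:num_steps])]

def stage2_diffusion_refine(coarse: list[int], num_codebooks: int = 9) -> list[list[int]]:
    """Stand-in for the diffusion model that fills in the remaining codebooks."""
    # A real refiner would denoise latent codes conditioned on the coarse tokens;
    # here we just tile them to show the final (codebooks x time) grid shape.
    return [[(t + k) % 1024 for t in coarse] for k in range(num_codebooks)]

coarse = stage1_generate_coarse_tokens("jazz reggae concert")
full = stage2_diffusion_refine(coarse)
print(len(full), len(full[0]))  # grid shape: (num_codebooks, sequence length)
```

Decomposing generation this way keeps the autoregressive sequence short (one codebook instead of all of them), which is where the training-efficiency gain of the LM stage comes from.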

Qualitative Results

We provide a set of audio samples with their mel spectrograms for qualitative comparison. All samples are generated from the same text prompt:
"This is the recording of a jazz reggae concert. There is a saxophone lead playing a solo. There is a keyboard and an electric guitar playing the main tune with the backing of a bass guitar. In the rhythmic background, there is an acoustic reggae drum beat. The atmosphere is groovy and chill. This piece could be playing in the background at a beach. It could also be included in the soundtrack of a summer/vacation/tropical themed movie."

SiMBA (100k) / Diffusion

SiMBA (50k) / Diffusion

Transformer (400k) / Diffusion

Transformer (100k) / Diffusion

BibTeX

@misc{lee2026trainingefficienttexttomusicgenerationstatespace,
    title={Training-Efficient Text-to-Music Generation with State-Space Modeling}, 
    author={Wei-Jaw Lee and Fang-Chih Hsieh and Xuanjun Chen and Fang-Duo Tsai and Yi-Hsuan Yang},
    year={2026},
    eprint={2601.14786},
    archivePrefix={arXiv},
    primaryClass={cs.SD},
    url={https://arxiv.org/abs/2601.14786}, 
}