Music Question Answering on MusicQA
Metrics
BERTScore
BLEU
METEOR
ROUGE
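The four metrics compare a model's generated answer against a reference answer, either via contextual-embedding similarity (BERTScore) or n-gram/alignment overlap (BLEU, METEOR, ROUGE). Below is a minimal sketch of computing them with the Hugging Face `evaluate` library; the benchmark's exact evaluation setup may differ, and the example strings are placeholders rather than data from MusicQA.

```python
# Sketch: scoring a predicted answer against a reference answer with the
# four metrics used by this benchmark, via the Hugging Face `evaluate` library.
import evaluate

# Placeholder prediction/reference pair, not taken from the MusicQA dataset.
predictions = ["The piece is an upbeat jazz tune led by a saxophone."]
references = ["An upbeat jazz track featuring a prominent saxophone melody."]

bertscore = evaluate.load("bertscore")
bleu = evaluate.load("bleu")
meteor = evaluate.load("meteor")
rouge = evaluate.load("rouge")

# BERTScore matches contextual token embeddings; `lang="en"` selects the
# default English model. It returns per-example precision/recall/F1 lists.
bs = bertscore.compute(predictions=predictions, references=references, lang="en")
print("BERTScore (F1):", sum(bs["f1"]) / len(bs["f1"]))

# BLEU, METEOR, and ROUGE are surface-overlap metrics over n-grams
# (BLEU, ROUGE) or unigram alignments with stemming/synonymy (METEOR).
print("BLEU:  ", bleu.compute(predictions=predictions, references=references)["bleu"])
print("METEOR:", meteor.compute(predictions=predictions, references=references)["meteor"])
print("ROUGE-L:", rouge.compute(predictions=predictions, references=references)["rougeL"])
```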
Results
Performance of various models on this benchmark:
| Model Name | BERTScore | BLEU | METEOR | ROUGE | Paper Title | Repository |
|---|---|---|---|---|---|---|
| LLaMA Adapter | 0.895 | 0.273 | 0.334 | 0.413 | LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention | |
| LTU | 0.887 | 0.242 | 0.274 | 0.326 | Listen, Think, and Understand | |
| MU-LLaMA | 0.901 | 0.306 | 0.385 | 0.466 | Music Understanding LLaMA: Advancing Text-to-Music Generation with Question Answering and Captioning | |