HumMusQA: A Human-written Music Understanding QA Benchmark Dataset

Mar 29, 2026 · arXiv

Abstract

Evaluating music understanding in Large Audio-Language Models (LALMs) requires a rigorously defined benchmark that truly tests whether models can perceive and interpret music, a standard that current dataset construction methodologies frequently fail to meet. This paper introduces a carefully structured approach to music evaluation, proposing a new dataset of 320 hand-written questions curated and validated by experts with musical training, and argues that such focused, manual curation is better suited to probing complex audio comprehension. To demonstrate the dataset's use, we benchmark six state-of-the-art LALMs and additionally test their robustness to uni-modal shortcuts.
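The uni-modal shortcut test mentioned in the abstract can be thought of as an ablation: if a model answers nearly as well from the question text alone as it does with the audio, it is likely exploiting textual cues rather than listening. The sketch below illustrates this idea only; the `model.answer(...)` interface and the dataset fields (`audio`, `question`, `gold`) are hypothetical stand-ins, not the paper's actual API.

```python
# A minimal sketch of a uni-modal shortcut check, assuming a hypothetical
# model interface and dataset schema (not taken from the paper).
from typing import Iterable


def shortcut_gap(model, questions: Iterable[dict]) -> float:
    """Return accuracy(audio + text) - accuracy(text only).

    A small gap suggests the model answers from the question text alone,
    i.e., it relies on a uni-modal shortcut rather than the audio.
    """
    with_audio = text_only = n = 0
    for q in questions:
        n += 1
        # Full multimodal input: audio clip plus the question text.
        if model.answer(audio=q["audio"], question=q["question"]) == q["gold"]:
            with_audio += 1
        # Ablated input: the question text with no audio attached.
        if model.answer(audio=None, question=q["question"]) == q["gold"]:
            text_only += 1
    return (with_audio - text_only) / max(n, 1)
```

Under these assumptions, a gap near zero would flag questions that are answerable without the recording, which is exactly the failure mode the robustness test is designed to expose.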
