Verbal Data-Driven Semantic Dimensions of Musical Timbre
The overall goal of this study is to better model semantic spaces and audio features that are particular to individual types of musical sound (e.g., sustained versus impulsive) and musician (e.g., violinists versus pianists), and that account for changes in musical context (i.e., horizontal and vertical combinations of pitches, dynamics, durations, and articulations). As a first step, a Web-based survey using 64 short (on average 20 s) instrumental solo excerpts from recorded music (4 instrument types × 4 excerpts × 4 recordings) was designed to obtain free-format verbal descriptions of violin, clarinet, piano, and guitar timbre from musicians describing their own instruments as well as others. We present preliminary results of a linguistic analysis, grounded in psychological theories of perception and sensory categorization, that aims to characterize the descriptions as a whole (e.g., linguistic resources, frequency distribution, discrimination ability) and to derive the emergent instrument-dependent and instrument-independent semantic spaces underlying them. These results will form the basis for statistical models that not only predict semantic attributes of musical sounds but also assess the extent to which these attributes are universal or specific to instrument, musician, and context.