Authors: R. Michael Winters IV
Publication or Conference Title: M.A. Thesis, McGill University
Contemporary music research is a data-rich domain, integrating a diversity of approaches to data collection, analysis, and display. Though the idea of using sound to perceive scientific information is not new, using it as a tool to study music is a special case that has unfortunately lacked proper development. To explore this prospect, this thesis examines sonification of three types of data endemic to music research: emotion, gesture, and corpora. Emotion is a type of data most closely associated with the emergent field of affective computing, though its study in music began much earlier. Gesture is studied quantitatively using motion capture systems designed to accurately record the movements of musicians or dancers in performance. Corpora designates large databases of music itself, such as the collected string quartets of Beethoven or an individual’s music library. Though the motivations for using sonification differ in each case, as this thesis makes clear, added benefits arise from the shared medium of sound. In the case of emotion, sonification benefits from the robust literature on the structural and acoustic determinants of musical emotion and from new computational tools designed to recognize it. Sonification finds application by offering systematic and theoretically informed mappings, capable of accurately instantiating computational models and of abstracting the emotional elicitors of sound from a specific musical context. In gesture, sound can be used to represent a performer’s expressive movements in the same medium as the performed music, making relevant visual cues accessible through simultaneous auditory display. A specially designed tool is evaluated for its ability to meet the goals of sonification and of expressive movement analysis more generally. In the final case, sonification is applied to the analysis of corpora.
Playing through Bach’s chorales, Beethoven’s string quartets, or Monteverdi’s madrigals at high speeds (up to 10^4 notes/second) yields characteristically different sounds and can serve as a technique for the analysis of pitch-transcription algorithms.