Sound Studies Lecture, 29 October 2012
Can music performance metrics be used to improve the perception of information in sonified data and to enhance the expressivity of computer music?

Classical cognitive science follows a long philosophical tradition in placing consciousness at the source of knowledge. This ‘mentalist’ approach treats music as a complex, patterned, time-ordered series of disembodied acoustic events that vary in pitch, loudness and timbre, and that are absorbed and elicit emotions when listened to. Western art music has embedded this paradigm in compositions that are abstractly composed to be played by expert musicians, who are responsible for producing the music in concert venues themselves designed as spaces that separate performers from a relatively passive audience.
Computer music, which developed alongside and intertwined with classical cognitive science in the second half of the twentieth century, has also been heavily influenced by this paradigm, whose assumptions are embedded in the software tools used to create it. These tools have been widely adopted by the data sonification and auditory display research communities. The goal of such displays is to enable a better understanding or appreciation of changes and structures in datasets of varying size, dimension and complexity. The most common technique for rendering multivariate datasets as sound is to map psychoacoustic parameters (pitch, loudness, duration, timbre, location) to dataset variables in various ways, much as a composer might score an abstract musical structure. Such rendering frequently reveals a widely recognised, deep perceptual problem, known as The Mapping Problem: when an auditory ‘feature’ appears in such a mapping, it is often difficult to ascertain whether it is actually a feature of the dataset or merely a result of the coincidental interaction between co-dependent psychoacoustic parameters.
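The parameter-mapping technique described above can be sketched in a few lines. This is a minimal illustration, not any specific tool's API: the variable names (`temperature`, `pressure`, `flow`) and the chosen pitch, loudness and duration ranges are hypothetical, and each dataset record becomes one sound event.

```python
# Minimal sketch of parameter-mapping sonification.
# Hypothetical dataset variables are scaled to 0..1, then mapped to
# psychoacoustic parameters: temperature -> pitch, pressure -> loudness,
# flow -> duration. Ranges are illustrative only.

def normalise(values):
    """Scale a list of numbers linearly into the 0..1 range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [(v - lo) / span for v in values]

def map_to_sound(dataset):
    """Return one sound-event dict per dataset record."""
    temps = normalise([r["temperature"] for r in dataset])
    press = normalise([r["pressure"] for r in dataset])
    flows = normalise([r["flow"] for r in dataset])
    events = []
    for t, p, f in zip(temps, press, flows):
        events.append({
            "pitch_hz": 220.0 + t * (880.0 - 220.0),  # two-octave pitch range
            "loudness_db": -30.0 + p * 24.0,          # -30 dB .. -6 dB
            "duration_s": 0.1 + f * 0.4,              # 0.1 s .. 0.5 s
        })
    return events

data = [
    {"temperature": 20.0, "pressure": 101.0, "flow": 1.0},
    {"temperature": 25.0, "pressure": 99.0,  "flow": 3.0},
    {"temperature": 30.0, "pressure": 100.0, "flow": 2.0},
]
events = map_to_sound(data)
```

Even in this toy case The Mapping Problem is visible: because perceived loudness depends on pitch and duration as well as on amplitude, a listener cannot easily tell whether a prominent-sounding event marks a genuine data feature or an interaction between the three co-varying parameters.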
By emphasising the body as the primary site of knowing the world, some phenomenologists have challenged the classical disembodied paradigm of perception. More recently, neuroscience has discovered biological structures and processes that also challenge its validity, and is developing powerful new theories of the temporal and causal relationships between awareness, perception, conception, intention and action.
This lecture outlines the conceptual, biophysical, musical and auditory-display dimensions of The Mapping Problem and proposes an approach to improving the rendering of sonifications, for both data-display and musical purposes, that encodes findings from research in neuroscience, gestalt psychology and embodied music cognition. In contrast to approaches in which embodiment for sonification focuses on the interactivity of the user interface to the dataset, this research seeks to exploit aspects of embodiment (such as those transmitted through microtiming) that are tacitly ‘available’ to listeners through aural means alone.
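One way to picture microtiming in a sonification context is as small, deliberate deviations from a nominal event grid, analogous to expressive timing in performance. The sketch below is purely illustrative and does not represent the lecture's actual method: the `salience` weighting and the 20 ms maximum shift are assumptions, not measured performance data.

```python
# Hypothetical sketch: adding performance-style microtiming to a
# sonification event stream. Each nominal onset is shifted by up to
# max_shift_s, in proportion to a 0..1 salience value assigned to the
# datum it represents -- a crude analogue of expressive agogics.

def apply_microtiming(onsets, salience, max_shift_s=0.02):
    """Return onsets with salience-proportional timing deviations."""
    return [t + s * max_shift_s for t, s in zip(onsets, salience)]

nominal = [0.0, 0.5, 1.0, 1.5]      # strict grid of event onsets (s)
salience = [0.0, 1.0, 0.2, 0.8]     # assumed importance of each datum
expressive = apply_microtiming(nominal, salience)
```

The point of such a scheme is that the information rides on timing nuance alone, so it reaches the listener aurally without requiring any interactive interface to the dataset.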
Dr. David Worrall is an experimental composer and designer working in data sonification, sound sculpture and design, software and immersive polymedia, as well as acoustic instrumental composition. He was the Foundation Head of the Australian Centre for the Arts and Technology and is currently Adjunct Research Fellow at the Australian National University in Canberra. David is also a Board Member of the International Community for Auditory Display and Australasian editor for the journal Organised Sound (Cambridge University Press).