Auditory Neuroscience Laboratory

Within: Bosch Institute, Discipline of Physiology

Head of laboratory

On this page:

Introduction to the Lab
Current Research
Recent Selected Publications
Major Collaborations

Introduction to the Lab

The Auditory Neuroscience Laboratory hosts a multidisciplinary research program examining auditory perception, with a focus on spatial perception and the cocktail party problem. Our studies blend bioacoustic, psychophysical, neurophysiological and computational modelling approaches with state-of-the-art virtual space technologies. Broadly speaking, our current work examines the nature of the acoustic cues available to the auditory system, the way in which differences in the spatial locations of sound sources are exploited, the perceptual encoding of speech, and the role of auditory spatial attention. In a number of cases, the results of our pure research have guided the development of practical applications that have been taken up by industries as diverse as hearing aid design and commercial game development.

Facilities

The laboratory is home to a large (64 m³) anechoic chamber equipped for a wide range of investigations. The chamber has an insertion loss of better than 30 dB for sound frequencies above 100 Hz, rising rapidly to greater than 60 dB above 500 Hz. It is anechoic down to 200 Hz (better than 99% absorption). The chamber is equipped with a robot arm carrying a small speaker that can be placed at almost any location on the surface of an imaginary sphere (radius 1 m) surrounding a test subject seated at the centre of the chamber. The robot arm is fully automated with micro-stepper motor controllers and has a placement accuracy of better than 0.1 degrees. ANL also houses a double-walled audiometric booth and other testing spaces, as well as a suite of offices and a meeting room.
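To illustrate the geometry involved, the speaker's target position on the 1 m sphere can be computed from an azimuth and elevation pair. The following is a minimal sketch only, not the laboratory's actual control software; the coordinate convention (x forward, y left, z up) is an assumption for illustration:

```python
import math

def speaker_position(azimuth_deg, elevation_deg, radius_m=1.0):
    """Cartesian target (x, y, z) in metres for a speaker on a sphere
    centred on the listener: x points forward, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = radius_m * math.cos(el) * math.cos(az)
    y = radius_m * math.cos(el) * math.sin(az)
    z = radius_m * math.sin(el)
    return x, y, z
```

Under this convention, `speaker_position(90, 0)` places the speaker 1 m to the listener's left, and `speaker_position(0, 90)` places it directly overhead.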

ANL also enjoys strong cross-Faculty collaborations. As a consequence, the facilities of the laboratory are significantly augmented by those in the Computer and Augmented Reality Laboratory in Engineering, the Perception Research Group in Psychology, and The Acoustics Research Laboratory in Architecture. Other major national collaborators include the Macquarie University Audiology Clinic and the National Acoustic Laboratories.

Virtual Auditory Space: A Powerful Experimental Tool

The anechoic chamber not only allows for precisely controlled presentation of stimuli in the anechoic free field, but also provides the ideal environment for high-fidelity bioacoustic recordings. When we detect sounds in the real world (for example, a bird chirping, an approaching car, or someone calling our name), we usually have some idea of where in the space around us their sources are located. That is to say, we sense that they emanated from somewhere outside our own body: the bird is in a tree above us, the car is approaching from the left, and our friend is calling from behind.

However, when we listen to sounds over headphones, we have the impression that the source is located inside our head. This lack of externalisation arises because, out in the world, incoming sounds interact with the contours of our outer ears (the pinnae) before reaching our eardrums. Each pinna acts as an acoustic filter that amplifies some frequency components and attenuates others, depending on the direction of incidence of the sound wave. These location-specific filtering characteristics of the outer ears, known as head-related transfer functions (HRTFs), are highly individual from person to person and from ear to ear. They serve as important 'acoustic fingerprints' that in large measure allow us to resolve ambiguities inherent in other acoustic cues to a given sound source's location.

Virtual auditory space (VAS) technology reintroduces HRTFs to signals presented over headphones, so that the signals appear 'spatialized' to the listener. When well rendered, VAS can place a listener in any kind of auditory environment, from the most basic (e.g. a single sound source in a nondescript free-field environment) to the highly complex (e.g. a reverberant 'room' containing multiple sounds originating from different locations, or sounds moving around the listener). However, individual differences in the filter functions of the outer ear are large enough that using generic, non-individualized filters to create VAS for a particular listener can significantly degrade its perceptual fidelity.
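The core operation behind VAS rendering is convolution of a mono source signal with the head-related impulse responses (the time-domain counterparts of the HRTFs) for the desired direction, one per ear. A minimal sketch, assuming the HRIRs have already been measured or predicted (the function names are illustrative, not the laboratory's software):

```python
def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal in virtual auditory space by convolving it
    with the left- and right-ear head-related impulse responses (HRIRs)
    for one source direction. Returns (left, right) headphone signals."""
    def convolve(signal, kernel):
        # Direct-form FIR filtering: output length is len(signal)+len(kernel)-1.
        out = [0.0] * (len(signal) + len(kernel) - 1)
        for i, s in enumerate(signal):
            for j, k in enumerate(kernel):
                out[i + j] += s * k
        return out
    return convolve(mono, hrir_left), convolve(mono, hrir_right)
```

In practice this filtering is done with FFT-based convolution for efficiency, and the HRIR pair is switched or interpolated as the virtual source moves around the listener.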

This fact has proved a major stumbling block to the wider application of VAS because, until recently, the only way to address the issue was to measure the HRTFs of each user directly. While this method yields the most realistic virtual environments, it is impractical for large-scale projects and commercial endeavours, since it is time- and resource-intensive and requires specialized facilities. While our lab is able to do just this, our bioacoustic and modelling work has also culminated in a more accessible solution that allows us to predict the precise acoustic filtering characteristics of individual ears using generalized filters as a starting point. This simple procedure can be performed outside the laboratory using inexpensive headphones and a laptop, making VAS much more practical for the broad range of applications that require realistic virtual auditory environments (e.g. multi-talker communication scenarios, specialized training environments, immersive games systems).

Current Research

The wide-ranging nature of the work undertaken at ANL ensures that our research program is constantly evolving in new and unexpected ways. While several side projects are afoot, most of the experiments being conducted fall under two broad umbrellas.

Auditory spatial perception and the cocktail party problem

Much of our research focuses on the so-called "cocktail party" problem. That is, how are we able to hear out a talker of interest against a noisy backdrop of other sounds competing for our attention? While this is a significant signal-processing problem, it is not an effortful task for most people with healthy hearing. However, even mild hearing loss severely impairs an individual's ability to do this effectively, and even the most advanced hearing aids confer little perceptual benefit in these conditions.

We take a multidisciplinary approach to the issue, blending bioacoustic and psychophysical methods with computational modelling to identify the cues that the healthy auditory system uses to selectively focus attention in acoustically lively environments. This includes the examination of a number of basic perceptual questions that have implications for the manner in which much of this information is processed and integrated with other spatial senses (vision in particular). Additionally, we are interested in the mechanisms by which the auditory system accommodates to changes in the inputs produced by age-related changes in ear shape and sensitivity. The outcomes of this research are informing the design of next-generation hearing aids.

Perception of auditory motion

Our sense of auditory motion can be induced either by the motion of our own bodies through an environment containing stationary sound sources, or by our ability to detect and track motion of the sound sources themselves. In most everyday situations, we encounter a complex mixture of both. ANL is currently conducting a range of bioacoustic and psychophysical studies that examine this little-understood perceptual-motor capability. As this basic function is known to be degraded in individuals with certain neurological disorders, among them schizophrenia, this research also has implications for the development of a predictive clinical test for these illnesses. Our preliminary work has uncovered both surprising similarities to, and differences from, the way in which we perceive moving visual stimuli, thereby contributing to both integrated and differentiated models of spatial motion.

Current national competitive grants*

2011 - The effect of multi-sensory and sensory-motor training on auditory accommodation - Carlile S, Australian Research Council Discovery Project ($270,000 over 3 years)
*Grants administered through the University of Sydney

Recent Selected Publications

Carlile S (2014) The plastic ear and perceptual relearning in auditory spatial perception (Review). Frontiers in Neuroscience (Submitted).

Sharma M, Dhamani I, Leung J, & Carlile S (2014) Evaluation of Auditory Attention, Processing and Memory in School-aged Children with Listening Difficulties in Noisy Environments. Journal of Speech, Language and Hearing Research (Submitted).

Carlile S, Fox A, Orchard-Mills E, Leung J, & Alais D (2014) Six degrees of separation - The portal for auditory perception. PNAS (Submitted).

Sankaran N, Leung J, & Carlile S (2014) Effects of virtual speaker density and room reverberation on spatiotemporal thresholds of audio-visual motion. PLoS ONE (Submitted).

Freeman T, Leung J, Wufong E, Orchard-Mills E, Carlile S, & Alais D (2014) Discrimination contours for moving sounds reveal duration and distance cues dominate auditory speed perception. PLoS ONE (Submitted).

Feinkohl A, Locke S, Leung J, & Carlile S (2014) The effect of velocity on auditory representational momentum. JASA-EL (In Press).

Durin V, Carlile S, Guillon P, Best V, & Kalluri S (2014) Acoustic analysis of the monaural localization cues captured by five different hearing aid styles. J Acoust Soc Am (In Press).

Carlile S, Balachandar K, & Kelly H (2014) Accommodating to new ears: The effects of sensory and sensory-motor feedback. J Acoust Soc Am 135:2002-2011.

Carlile S (2014) The Christmas party problem. Audiology Now (55):15-17.

Reijniers J, Vanderelst D, Jin C, Carlile S, & Peremans H (2014) An ideal-observer model of human sound localization. Biological Cybernetics 108:169-181.

Orchard-Mills E, et al. (2013) A Mechanism for Detecting Coincidence of Auditory and Visual Spatial Signals. Multisensory Research 26(4):333-345.

Dhamani I, Leung J, Carlile S, & Sharma M (2013) Switch attention to listen. Scientific Reports 3:1-8.

Carlile S & Blackman T (2013) Relearning auditory spectral cues for locations inside and outside the visual field. J Assoc Res Otolaryngol 15:249-263.

Major Collaborations

Dr David Alais
Associate Professor
School of Psychology, The University of Sydney
Research interests: cross-modal (particularly audio-visual) perception


Mr Rick Ballan
Honorary Associate
Discipline of Physiology & The Bosch Institute, The University of Sydney
Research interests: mathematical modeling of pitch perception


Dr Virginia Best
Research Scientist
National Acoustic Laboratories
Research interests: effects of sensorineural hearing loss on spatial hearing, simulation of real-world environments for psychophysical testing


Professor André van Schaik
Head, Bioelectronics and Neuroscience Research Group
MARCS Institute, University of Western Sydney
Research interests: neuroscience, neuromorphic engineering, integrated circuit design, psychophysics


Dr Craig Jin
Associate Professor
School of Electrical and Information Engineering, The University of Sydney
Research interests: spatial-audio perception and coding, computational models of the human auditory system, auditory scene analysis, neuromorphic spike computation