Research Supervisor Connect

Fluent Mobility for the Blind Individual Using Auditory Sensory Augmentation

Summary

Supervisor

Associate Professor Craig Jin

Research location

Biomedical Engineering

Synopsis

The University of Sydney and the University of Technology Sydney are developing novel, multimodal auditory sensory augmentation technologies to assist the visually impaired, based on wearable glasses with machine vision (https://sites.google.com/view/masa-dec). We have a relatively large team exploring: (1) using sensors and artificial intelligence to extract information for a targeted objective; (2) rendering this information via the auditory channel as sound; and (3) enabling feedback control via hand/wrist or other sensors. We will be conducting psychophysical experiments to explore behavioural performance and coupling these experiments with EEG and/or fMRI studies. Experiments are typically run using motion capture and the latest AR/VR/XR equipment.

We have a PhD position available for a student with an audio and music background, and an interest or background in cognitive science, to explore mainly non-verbal audio signals and their real-time rendering to convey spatial and navigational information about the surrounding environment.

The PhD project's aims include:

  • Establishing a psychoacoustic paradigm to support rapid development and evaluation of novel auditory sensory augmentation techniques. Example tasks include walking around obstacles and finding and opening a door.
  • Developing and exploring various non-verbal audio techniques to convey spatial and navigational information. Examples include amplitude and pitch changes, Doppler effects, and virtual reality audio to simulate location in space (see the sketch after this list).
  • Conducting navigationally oriented psychoacoustic experiments using sound stimuli with blindfolded sighted individuals as well as people with low vision or blindness.
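
As a purely illustrative sketch of one of the non-verbal techniques listed above, the short Python example below maps the distance to an obstacle onto the pitch and amplitude of a tone, so that nearer obstacles sound higher and louder. The particular numbers (300-1200 Hz, a 5 m range, 0.2 s tones) and the use of NumPy are assumptions made for illustration only, not the project's actual rendering method.

    # Illustrative sonification sketch: encode obstacle distance as the pitch and
    # loudness of a short tone (nearer obstacle -> higher pitch, louder tone).
    # The numeric choices (300-1200 Hz, 5 m range, 0.2 s tones) are arbitrary.
    import numpy as np

    SAMPLE_RATE = 44100  # samples per second

    def distance_to_tone(distance_m, max_distance_m=5.0, duration_s=0.2):
        """Return a mono tone whose pitch and amplitude encode obstacle distance."""
        # Normalise distance to a proximity value in [0, 1]; 1 means very close.
        proximity = 1.0 - min(max(distance_m, 0.0), max_distance_m) / max_distance_m
        frequency_hz = 300.0 + 900.0 * proximity   # far -> 300 Hz, near -> 1200 Hz
        amplitude = 0.1 + 0.9 * proximity          # far -> quiet, near -> loud
        t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
        return amplitude * np.sin(2.0 * np.pi * frequency_hz * t)

    # Example: a sequence of tones for an obstacle approaching from 4 m to 0.5 m.
    signal = np.concatenate([distance_to_tone(d) for d in (4.0, 3.0, 2.0, 1.0, 0.5)])

A real system would render such cues binaurally (for example with head-related transfer functions) and in real time; this sketch only shows a distance-to-parameter mapping.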

Research Environment

Experimental paradigms will be developed and explored in the labs at USyd, UTS and the Human Augmentation Laboratory (https://www.haltechcentral.com.au/). fMRI studies will be conducted at Westmead Hospital. We have specialised acoustic facilities available, such as anechoic and semi-anechoic chambers, loudspeaker arrays, linear and spherical microphone arrays, and AR/VR/XR platforms. We also have a wide variety of behavioural and physiological recording techniques, including cardiac monitoring, electrodermal response, EEG and fMRI. You will have access to mechanical and electronics workshops and a pool of technical staff to help realise your research ambitions. The University of Sydney and the University of Technology Sydney offer a rich academic setting in a world-class city, and we have strong ties to a network of nearby and international academic and industrial collaborators.

Additional information

Offering:

A fully funded 3.5-year PhD scholarship covering tuition fees, plus a stipend for living expenses.

What are we looking for?

  •  Experience in music and sound design
  •  Background in Cognitive Science
  •  Familiarity with audio programming (Python, C# and Unity)

About You

Successful candidates will have:

  •  A bachelor's degree in a relevant discipline
  •  Interest in developing novel auditory sensory augmentation systems and working with AR/VR/XR simulation environments
  •  Excellent communication and interpersonal skills
  •  An interest in and familiarity with working with audio
  •  Hands-on experience with coding for one or more video game simulation environments would be an asset

Domestic and international applicants are welcome.

How to apply

For all enquiries, please email both David Alais (david.alais@sydney.edu.au) and Craig Jin (craig.jin@sydney.edu.au).

Opportunity ID

The opportunity ID for this research opportunity is 3538.
