Mr Courtney Hilton


Biographical details

Courtney Hilton is a PhD candidate and postgraduate fellow at the University of Sydney.

Research interests

Courtney Hilton's research spans cognitive science, musicology, and the learning sciences. His PhD work explores how we process structure in music and language, emphasising the parallels in how the human brain solves the various cognitive challenges posed by these uniquely human inventions. This topic is explored through a combination of experimental (behavioural and electrophysiological) and computational methods. Courtney also has an active interest in how this basic cognitive research can inform our understanding of the human brain and mind, how we can teach and learn music, and how learning music may have beneficial transfer effects on language development in children and in clinical populations.

Courtney also has a research agenda in how people learn, and in improving our educational systems and technologies. Current projects touch on:

  • music & mathematics education
  • novel technologies
  • medical education
  • student partnerships
  • learning through teaching, questioning, and explaining

Thesis work

Thesis title: Music, language, and gesture: How the brain uses meter to facilitate structural integration

Supervisors: Micah GOLDWATER, Michael J JACOBSON

Thesis abstract:

Our unique human capacity for music and language requires the ability to integrate sequences of discrete elements into hierarchical structures. The neural resources underlying this structure-building process are hypothesised to be shared (Patel, Nature Neuroscience, 2003), predicting interference effects when we overload this resource with simultaneous linguistic and musical complexity (see Fedorenko et al., Memory and Cognition, 2009). Neural computation also has the property of being rhythmic: our ability to perceive and to process varies as a function of attentional resources that fluctuate rhythmically over time. It is therefore neurally efficient to align these points of maximal attention with those points in a sequence of processing steps that demand the most resources. This problem of when to allocate neural resources for structural integration can be solved in the domains of music and language by taking advantage of temporal and sequential regularities. In music, people perceptually extract a beat and organise this beat into a hierarchical meter, and there is a parallel to this in speech rhythms. Meter, I therefore argue, functions as a system that helps allocate resources to efficiently process structure (whether musical or linguistic). My PhD work explores this theory of meter in a series of neuroimaging (electroencephalography) and behavioural experiments. Additionally, this research explores how meter is perceptually established, either through an implicit bottom-up process or through an explicit top-down strategy involving mental imagery or physical gesture (relating to the notion of active sensing). This research has implications for basic research in cognitive neuroscience, but also for music education, music theory, and for questions of how music education and interventions may be used in the treatment of language and speech disorders in children with developmental problems or in other clinical populations.
