Multimodality Medical Image Segmentation

Summary

Automated delineation of tissues, organs and hot-spot volumes from medical images by adaptively using the complementary information from multiple imaging modalities

Supervisors

Professor David Feng, Dr Yong Xia.

Research location

Computer Science

Program type

Masters/PhD

Synopsis

The enormous volume of medical images produced globally is still analysed almost entirely through visual inspection on a slice-by-slice basis. This requires a high degree of skill and concentration, and is time-consuming, expensive, prone to operator bias, and unsuitable for processing large-scale research samples. Computer-aided medical image analysis, in which segmentation is an essential step, would enable doctors and researchers not only to bypass these issues, but also to display and manipulate the information with unprecedented control, facilitating further analysis.

Multimodality medical imaging, such as PET-CT, simultaneously offers highly complementary functional (PET) and anatomical (CT) information about the patient, and therefore opens up great potential for more accurate and reliable image segmentation. However, it is also a significant challenge to use the information from multiple modalities in an integrated and efficient way. This project aims to provide a systematic solution for multimodality medical image segmentation, using PET-CT images as a case study, and will develop algorithms to automatically delineate tissues, organs and hot-spot volumes. The research will approach the long-standing medical image segmentation problem from the point of view of adaptively using the complementary information from multiple imaging modalities.
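To make the idea of adaptive multimodality fusion concrete, the Python sketch below is a purely illustrative toy, not the project's actual method: it fuses co-registered PET and CT volumes with per-voxel weights derived from local gradient contrast, then thresholds the fused map. The function name, the gradient-based weighting scheme and the threshold value are all assumptions chosen for illustration.

import numpy as np

def adaptive_fusion_segmentation(pet, ct, threshold=0.5):
    """Toy illustration of adaptive multimodality fusion (hypothetical).

    pet, ct: co-registered 3-D volumes with intensities scaled to [0, 1].
    Per-voxel weights favour the modality with higher local contrast,
    so functional (PET) and anatomical (CT) evidence are combined
    adaptively rather than averaged blindly.
    """
    # Estimate local contrast as the gradient magnitude of each volume.
    pet_grad = np.linalg.norm(np.stack(np.gradient(pet)), axis=0)
    ct_grad = np.linalg.norm(np.stack(np.gradient(ct)), axis=0)

    # Adaptive weight: how much to trust PET evidence at each voxel.
    w = pet_grad / (pet_grad + ct_grad + 1e-8)

    fused = w * pet + (1.0 - w) * ct   # adaptively fused evidence map
    return fused > threshold           # binary segmentation mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pet = rng.random((32, 32, 32))     # stand-in for a PET volume
    ct = rng.random((32, 32, 32))      # stand-in for a CT volume
    mask = adaptive_fusion_segmentation(pet, ct)
    print(mask.shape, mask.mean())

In practice the project would replace this hand-crafted weighting with learned, context-dependent fusion, but the sketch captures the core idea named in the synopsis: weighting each modality's contribution voxel by voxel instead of treating the two channels uniformly.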

Opportunity ID

The ID for this research opportunity is 1193.
