Disease Map - Big Data driven modelling and derivation of diseases and treatment response


BMIT excels at addressing bio-inspired and other real-world challenges through core computing and information technology research in image processing and informatics, computer vision, big data fusion and analysis, visualization and visual analytics, multimedia technology and intelligent algorithms. Our research has numerous applications, including in the biomedical and health domains, where we have reshaped the biomedical research and digital healthcare practice landscape in several ways.

Students will join a strong research team and work closely with a multi-disciplinary supervisory team. They will be able to contribute to other related projects and build networks with other students, postdocs, clinicians and scientists. Our projects involve close collaboration with hospitals (e.g., Royal Prince Alfred, Westmead, Nepean) and industry partners (e.g., Microsoft) in Australia and abroad to translate algorithms into clinical trials and commercial applications; there will be opportunities for internships with our partner organisations.


Professor David Feng, Associate Professor Jinman Kim

Research Location

Computer Science

Program Type



One in four people will be affected by cancer in their lifetime. Our research aims to produce computationally derived cancer disease maps that extract and quantify important disease characteristics from a very large biomedical image data repository. The outcome will vastly improve personalised diagnosis and treatment of these cancers by providing new insights into how some cancers spread and how they differ between individuals.

Additional Information

Topic 1. Modelling tumour growth and spread in PET-CT imaging data

PET-CT is regarded as the imaging modality of choice for the evaluation, staging and assessment of treatment response in most cancers. It is also common for PET-CT scans to be acquired at intervals during treatment to monitor the patient’s response to therapy, e.g., whether the cancer is shrinking or growing, or spreading to other sites. In diseases such as lymphoma, there can be dozens or hundreds of sites of disease, some of which may change independently of other sites during treatment (e.g., some sites may grow while others shrink). Current techniques quantify these changes either by reporting on the disease burden as a whole or by manually analysing each site, which is not feasible as the number of disease sites increases.

In this project, we will derive a new deep learning technique for modelling changes across multiple disease sites by integrating convolutional neural networks (for analysing image data) and recurrent neural networks (for analysing temporal information). Ultimately, this will provide additional information to physicians when assessing patient response to therapy.

Topic 2. Functional structure detection in PET-CT imaging data

In PET-CT images, sites of disease (abnormalities) usually comprise high uptakes (hot spots) together with other visual characteristics such as shape, volume and localisation.
Existing methods for detecting abnormalities rely on modelling the characteristics of these abnormalities; however, this is challenging due to inconsistent image-omic (visual) features, varying anatomical localisation, and similarity to some normal structures that also exhibit high uptakes.

In this project, we aim to develop a new approach that automatically detects abnormalities in a reverse manner, by filtering out (removing) the normal, known structures that occur in the human body. We will pioneer state-of-the-art deep learning algorithms to iteratively filter out known structures, leaving abnormal structures as the output. This could significantly improve the segmentation and classification performance of existing methods and potentially increase physicians’ confidence in diagnosis in a clinical environment.

Topic 3. Robust segmentation and classification of multi-modal medical imaging data

Deep learning methods based on convolutional neural networks (CNNs) have recently achieved great success in image classification, object detection and segmentation. This success is primarily attributed to the capability of CNNs to learn image feature representations that carry a high level of semantic meaning, and many investigators have therefore attempted to adapt deep learning methods to medical image segmentation and classification problems. However, annotated medical image training data are comparatively scarce due to the large cost and complications of manually annotating medical images. Consequently, without sufficient training data to cover all the variations (e.g., lesions from different patients can differ greatly in size, shape and texture), deep learning methods cannot provide accurate results.

In this project, we will derive a new approach to train an accurate deep learning model for medical images with limited data.
More specifically, we will develop a deep learning based data augmentation approach to derive additional information and features that can boost the training process. Ultimately, this project could change the existing way of training medical imaging deep models and minimise the cost of building training datasets.
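To make the augmentation idea in Topic 3 concrete, the sketch below expands a small set of 3D image volumes with randomly transformed copies. This is a minimal classical (transform-based) illustration of the principle only; the project itself proposes a learned, deep-learning-based augmentation, which is not implemented here. All function names and parameters are illustrative assumptions, not part of the project.

```python
import numpy as np

def augment_volume(vol, rng):
    """Return one augmented copy of a 3D image volume (depth, height, width).

    Illustrative classical augmentations only: random left-right flip,
    random in-plane 90-degree rotation, and mild intensity scaling.
    """
    out = vol.copy()
    if rng.random() < 0.5:
        out = np.flip(out, axis=2)            # random left-right flip
    k = int(rng.integers(0, 4))
    out = np.rot90(out, k=k, axes=(1, 2))     # random axial rotation
    out = out * rng.normal(1.0, 0.05)         # mild global intensity jitter
    return out

def augment_dataset(volumes, copies=4, seed=0):
    """Expand a small training set: keep each volume plus `copies` variants."""
    rng = np.random.default_rng(seed)
    augmented = []
    for vol in volumes:
        augmented.append(vol)
        for _ in range(copies):
            augmented.append(augment_volume(vol, rng))
    return augmented
```

With 3 volumes and `copies=4`, the training set grows fivefold while every augmented volume keeps the original spatial dimensions, since flips and in-plane 90-degree rotations preserve the (height, width) square slice shape.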

Want to find out more?

Contact us to find out what’s involved in applying for a PhD (domestic and international students).

Contact the research expert to find out more about participating in this opportunity.

Browse for other opportunities within Computer Science.


medical image analysis, deep learning, machine learning, image processing, image modelling, computer-aided diagnosis

Opportunity ID

The opportunity ID for this research opportunity is: 2425

Other opportunities with Professor David Feng

Other opportunities with Associate Professor Jinman Kim