
Biomedical imaging, visualisation and information technologies

Personalised, preventive and predictive medical technologies
Our research into bio-inspired technologies, including biomedical imaging, visualisation and information technologies, aims to create a variety of clinical applications that improve people’s health.

Our research focuses on meeting healthcare challenges by developing core theories in information technologies and computer science research, including machine learning, computer vision, data science, artificial intelligence, bioinformatics, information visualisation and behavioural informatics. We work closely with several industry partners, including major hospitals and healthcare companies, conducting research from algorithms to prototypes, all the way to clinical trials and commercialisation.

Breakthroughs in these core theories and enabling techniques will be a major step forward, improving patient care and healthcare infrastructure such as multimedia patient record systems, advanced computer-assisted surgery and treatment, and telehealth for remote patient monitoring.

Our research projects

Our experts: Professor Dagan (David) Feng, Associate Professor Jinman Kim, Dr Ashnil Kumar, Professor Michael Fulham

Industry partners: Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital

One in four people will be affected by cancer in their lifetime. Our research aims to produce cancer disease maps that extract and quantify important disease characteristics from a very large biomedical image data repository. The outcome will vastly improve personalised diagnosis and treatment of these cancers by providing new insights into how some cancers spread and resist current treatments.

Our experts: Associate Professor Jinman Kim, Dr Younhyun Jung, Professor Michael Fulham

Our collaborator: Shanghai Jiao Tong University, China

Industry partners: Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital; Renji Hospital, China; Ruijin Hospital, China

The next generation of medical imaging scanners is introducing new diagnostic capabilities that improve patient care. These medical images are multi-dimensional (3D), multi-modality (for example, fused PET and MRI) and time-varying (that is, 3D volumes acquired at multiple time points, as in functional MRI). Our research couples volume-rendering technologies with machine learning and image processing to render realistic, detailed 3D volumes of the human body.
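One of the simplest volume-rendering techniques is maximum-intensity projection (MIP), which collapses a 3D volume to a 2D image by keeping the brightest voxel along each viewing ray. The sketch below illustrates the idea on a synthetic volume; the data, function name and parameters are invented for illustration, and real clinical renderers use full ray casting with transfer functions rather than a plain maximum.

```python
import numpy as np

def mip_render(volume, axis=0):
    """Maximum-intensity projection: collapse a 3D volume to 2D
    by taking the maximum voxel value along one axis."""
    return volume.max(axis=axis)

# Synthetic 32x32x32 "scan" with a bright spherical region in the centre
# (a hypothetical stand-in for a hot spot on a PET volume).
z, y, x = np.mgrid[:32, :32, :32]
volume = np.where((z - 16)**2 + (y - 16)**2 + (x - 16)**2 < 36, 1.0, 0.1)

image = mip_render(volume, axis=0)
print(image.shape)   # (32, 32)
print(image.max())   # 1.0 — the sphere shows up as a bright disc
```

Machine learning enters this pipeline by, for example, classifying voxels so that tissue of interest is emphasised before projection.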

Our experts: Associate Professor Weidong (Tom) Cai, Ms Yang Song

Feature-centric content analysis in biomedical images

Great advances in biological tissue labelling and automated microscopic imaging have revolutionised how biologists visualise molecular, sub-cellular, cellular and super-cellular structures and study their respective functions. Interpreting such image datasets quantitatively and automatically is a major challenge in computational biology. The essential methods of bioimage informatics involve image generation, visualisation, analysis and management. This project aims to develop novel algorithms for content analysis in microscopic images, such as segmentation of cell nuclei, detection of particular cell structures, and tracing of cell changes over time. Such algorithms would be valuable in turning image data into useful biological knowledge. These studies will focus on computer vision methodologies for feature extraction and learning-based modelling.
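As a minimal sketch of the nucleus-segmentation step, the example below thresholds a synthetic microscopy image and labels connected components so each nucleus receives an integer id. The image and threshold are invented for illustration; a real pipeline would add denoising, watershed splitting of touching nuclei, and the learned feature models the project describes.

```python
import numpy as np
from scipy import ndimage

# Synthetic 64x64 fluorescence image with two bright "nuclei".
image = np.zeros((64, 64))
image[10:18, 10:18] = 1.0   # nucleus 1 (8x8 pixels)
image[40:50, 30:42] = 0.8   # nucleus 2 (10x12 pixels)

mask = image > 0.5                       # simple intensity threshold
labels, n_nuclei = ndimage.label(mask)   # default 4-connectivity labelling

print(n_nuclei)   # 2 nuclei detected
sizes = ndimage.sum(mask, labels, range(1, n_nuclei + 1))
print(sizes)      # pixel area of each nucleus: [64. 120.]
```

Per-nucleus measurements like these (area, intensity, shape) are the features that downstream learning-based models consume.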

Neuroimaging computing for automated detection of longitudinal brain changes

Neuroimaging technologies, such as MRI, have transformed how we study the brain under normal and pathological conditions. As imaging facilities become increasingly accessible, more and more imaging data are collected from patients with chronic disorders in longitudinal settings. These big neuroimaging datasets open new possibilities for studying the brain with high translational impact, such as early detection of longitudinal brain changes and large-scale evaluation of imaging-based biomarkers. This project aims to develop novel computational methods that automatically detect longitudinal brain changes from large-scale longitudinal neuroimaging data, using machine-learning and deep-learning techniques.
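A toy version of longitudinal change detection is to fit a straight line to each voxel's intensity across time points and flag voxels whose slope exceeds a threshold. The sketch below does this with a least-squares fit on synthetic data; the scan values, times and threshold are invented, and real studies would first co-register the scans and use the learned models the project describes instead of a fixed cutoff.

```python
import numpy as np

times = np.array([0.0, 1.0, 2.0, 3.0])   # years since baseline scan
scans = np.zeros((4, 8, 8))              # 4 time points of an 8x8 "slice"
scans[:, 2, 3] = 1.0 + 0.2 * times       # one voxel changing over time

# Fit intensity = slope * t + intercept per voxel via least squares.
A = np.stack([times, np.ones_like(times)], axis=1)
coef, *_ = np.linalg.lstsq(A, scans.reshape(4, -1), rcond=None)
slopes = coef[0].reshape(8, 8)

changed = np.abs(slopes) > 0.1           # flag voxels with a clear trend
print(changed.sum())                     # 1 voxel flagged
print(round(float(slopes[2, 3]), 3))     # 0.2 — the injected rate of change
```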

Context modelling for large-scale medical image retrieval

Content-based medical image retrieval is a valuable mechanism to assist patient diagnosis. Unlike text-based search engines, it evaluates the similarity of images by comparing their visual features, so how best to encode complex visual features in a comparable mathematical form is crucial. And unlike image retrieval techniques designed for general imagery, in the medical domain disease-specific contexts need to be modelled as the retrieval target. This project aims to study techniques for visual feature extraction and context modelling in medical imaging, and to develop new methodologies for content-based image retrieval across various medical applications.
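The retrieval loop itself can be sketched in a few lines: encode each image as a feature vector and rank the database by similarity to the query. Here an intensity histogram stands in for the richer, disease-specific features the project studies, and cosine similarity is the comparison; all names and data are hypothetical.

```python
import numpy as np

def histogram_feature(image, bins=16):
    """Normalised intensity histogram as a simple visual-feature vector."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
database = [rng.random((32, 32)) for _ in range(5)]
query = database[3]   # in practice the query would be an unseen patient image

feats = [histogram_feature(im) for im in database]
q = histogram_feature(query)
ranking = sorted(range(5), key=lambda i: -cosine_similarity(q, feats[i]))
print(ranking[0])   # 3 — the matching image is retrieved first
```

Disease-specific context modelling replaces the histogram with features that capture, for example, lesion location and appearance relative to anatomy.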

Our experts: Dr Ashnil Kumar, Associate Professor Pablo Fernandez Penas, Associate Professor Jinman Kim, Dr Marina Ali

Industry partners: Westmead Hospital

Australia has one of the highest rates of melanoma in the world. Melanoma can be treated by simple lesion excision if diagnosed at an early stage. Sequential digital dermoscopy imaging is a technique that allows early detection; however, manual visual interpretation is subjective, with even well-trained physicians showing inter-observer variability. To overcome these limitations, we are investigating machine-learning algorithms to develop a computer-aided diagnosis system that detects and tracks changes in skin lesions.
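The change-tracking idea can be illustrated with a deliberately simple baseline: segment the dark lesion in each sequential image by thresholding and compare its area between visits. The images, threshold and growth cutoff below are all invented for the sketch; the project's actual system uses learned models rather than a fixed threshold.

```python
import numpy as np

def lesion_area(image, threshold=0.4):
    """Count pixels darker than the threshold (lesions are dark on skin)."""
    return int((image < threshold).sum())

skin = np.full((64, 64), 0.8)        # bright background "skin"
visit1, visit2 = skin.copy(), skin.copy()
visit1[20:30, 20:30] = 0.2           # 10x10-pixel lesion at first visit
visit2[18:32, 18:32] = 0.2           # 14x14-pixel lesion at follow-up

a1, a2 = lesion_area(visit1), lesion_area(visit2)
growth = (a2 - a1) / a1
print(a1, a2)            # 100 196
print(f"{growth:.0%}")   # 96% — a large increase would be flagged for review
```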

Our experts: Dr Na Liu, Associate Professor Jinman Kim, Professor Ralph Nanan, Professor Mohamed Khadra

Industry partners: Telehealth Technology Centre, Nepean Hospital; Charles Perkins Centre Nepean

Telehealth technologies enhance the delivery of healthcare through powerful mechanisms such as social networking, notifications, patient education and information portals, and patient monitoring, whether remotely by the care team or by family and friends. Telehealth can be broadly applied to many diseases. As a case study, obesity in multiple family members is common, and the importance of a family-based approach to weight management is well known. Our research aims to develop a family-focused application (app) that incentivises the whole family through family (social) networking, gamification, notifications, personalised analytics, goal setting and a reward mechanism. The app will be supported by a remote study nurse to encourage adherence, and its features will be evaluated for their usefulness in helping to induce lifestyle change.

Our experts: Associate Professor Xiuying Wang, Dr Hui Cui, Mr Chaojie Zheng

Detection and treatment-outcome prediction for malignant tumours and cancers

Recently the research community has seen great success using deep learning for image-analysis tasks; the convolutional neural network (CNN), for example, is one of the most widely used methods for object detection and recognition. This project will use deep learning for prognostic prediction of treatment outcomes in patients with malignant brain tumours. The multi-layer convolutions of a CNN will be used to detect and segment tumours and lesions, and the project will investigate which features are effective for training and how to design a more practical deep-learning scheme for treatment-outcome prediction.
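The building block behind the "multi-layer convolutions" is the 2D convolution itself: sliding a small kernel over the image to produce a feature map. The sketch below implements that operation directly; the hand-picked edge kernel stands in for weights a CNN would learn from labelled tumour images, and everything here is illustrative rather than the project's actual network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.zeros((8, 8))
image[:, 4:] = 1.0                 # vertical edge down the middle
kernel = np.array([[-1.0, 1.0]])   # responds to left-to-right intensity steps

fmap = conv2d(image, kernel)
print(fmap.shape)   # (8, 7)
print(fmap.max())   # 1.0 — strongest response along the edge column
```

A trained CNN stacks many such layers, learning kernels that respond to tumour boundaries and textures instead of a fixed edge.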

Collaborative learning of multimodality medical imaging data

Comprehensive and possibly complementary information embedded in multimodality images is of great importance for accurate clinical decision-making. Our project aims to develop collaborative learning algorithms that combine important features from multimodality images with prior domain knowledge to produce optimal solutions for automated subject identification and segmentation.

Content-oriented deformable image registration

Image registration is a fundamental image processing technique that spatially aligns images which may be obtained from multiple sensors, at different times, or from different viewing angles. Current registration methods derive the deformation field either from image intensity information or from extracted features. These schemes may introduce excessive deformation in locally important regions whose features should be preserved. Our research focuses on deriving deformation fields according to the image content, for more meaningful and sensible registration, using temporal medical images as test data.
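Intensity-based registration is easiest to see in its most restricted form: searching over pure translations for the shift that minimises the intensity difference between a moving and a fixed image. The brute-force sketch below does exactly that on synthetic data; deformable registration generalises this single global shift to a dense per-pixel deformation field, which is where the content-driven constraints described above come in. All data and names here are invented for illustration.

```python
import numpy as np

def register_translation(fixed, moving, max_shift=5):
    """Brute-force search for the integer (dy, dx) shift that best aligns
    `moving` to `fixed` under a sum-of-squared-differences cost."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            cost = ((fixed - shifted) ** 2).sum()
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

fixed = np.zeros((32, 32))
fixed[10:14, 10:14] = 1.0                                 # a bright structure
moving = np.roll(np.roll(fixed, 3, axis=0), -2, axis=1)   # same image, shifted

print(register_translation(fixed, moving))   # (-3, 2) — undoes the shift
```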

Fusion and analytics of omics data

Clinical testing and assessment, in conjunction with diverse medical imaging data, are the basis for more effective patient treatment and management. However, how to fully utilise the widely available omics data, including genomics, clinical characteristics, systemic immunity and radiomics, for individualised treatment has yet to be investigated. This project aims to analyse these multi-stream life data with visual-analytics techniques and to translate the analyses into an understanding of human health and disease patterns for prognostic medicine.