
Biomedical imaging, visualisation and information technologies

Personalised, preventive and predictive medical technologies

Biomedical imaging, visualisation and information technologies are the driving forces behind modern healthcare research. We focus on bio-inspired technologies for a variety of clinical applications to improve people’s health.

Our research focuses on meeting healthcare challenges by developing core theories in information technologies and computer science research, including machine learning, computer vision, data science, artificial intelligence, bioinformatics, information visualisation and behavioural informatics. We work closely with several industry partners, including major hospitals and healthcare companies, conducting research from algorithms to prototypes, all the way to clinical trials and commercialisation.

Breakthroughs in these core theories and enabling techniques will be a major step forward, improving patient care and healthcare infrastructure such as multimedia patient record systems, advanced computer-assisted surgery and treatment, and telehealth for remote patient monitoring.

Our research projects

Our experts: Professor Dagan (David) Feng, Associate Professor Jinman Kim, Dr Ashnil Kumar, Professor Michael Fulham

Industry partners: Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital

One in four people will be affected by cancer in their lifetime. Our research aims to produce cancer disease maps that extract and quantify important disease characteristics from a very large biomedical image data repository. The outcome will vastly improve personalised diagnosis and treatment of these cancers by providing new insights into how some cancers spread and resist current treatments.

Our experts: Associate Professor Jinman Kim, Dr Younhyun Jung, Professor Michael Fulham

Our collaborator: Shanghai Jiao Tong University, China

Industry partners: Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital; Renji Hospital, China; Ruijin Hospital, China

The next generation of medical imaging scanners is introducing new diagnostic capabilities that improve patient care. These medical images are multi-dimensional (3D), multi-modality (for example, fused PET and MRI) and time-varying (3D volumes acquired over multiple time points, as in functional MRI). Our research couples volume rendering technologies with machine learning and image processing to render realistic and detailed 3D volumes of the human body.
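
For illustration only, the minimal sketch below shows one way a front-to-back ray-compositing renderer can turn a 3D volume into a 2D image using a simple transfer function; the synthetic volume, the transfer function and the opacity scaling are all assumptions, not the group’s actual rendering pipeline.

```python
import numpy as np

def transfer_function(intensity):
    """Map normalised voxel intensity to (colour, opacity).

    A hypothetical ramp: brighter voxels are rendered brighter and more
    opaque. Real pipelines use clinically tuned lookup tables instead.
    """
    colour = intensity                                    # greyscale colour
    opacity = np.clip(intensity - 0.3, 0.0, 1.0) * 0.1
    return colour, opacity

def render_front_to_back(volume):
    """Composite a 3D volume into a 2D image along the first axis."""
    vol = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
    image = np.zeros(vol.shape[1:])                       # accumulated colour
    alpha = np.zeros(vol.shape[1:])                       # accumulated opacity
    for slice_ in vol:                                    # march rays front to back
        colour, opacity = transfer_function(slice_)
        image += (1.0 - alpha) * opacity * colour
        alpha += (1.0 - alpha) * opacity
    return image

if __name__ == "__main__":
    # Synthetic "body": a bright sphere inside a dim cube.
    z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    volume = 0.2 + 0.8 * ((x**2 + y**2 + z**2) < 0.25)
    print(render_front_to_back(volume).shape)             # (64, 64)
```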

Our experts: Associate Professor Weidong (Tom) Cai, Ms Yang Song

Feature-centric content analysis in biomedical images

Great advances in biological tissue labelling and automated microscopic imaging have revolutionised how biologists visualise molecular, sub-cellular, cellular and super-cellular structures and study their respective functions. Interpreting such image datasets quantitatively and automatically has become a major challenge in computational biology. The essential methods of bioimage informatics span image generation, visualisation, analysis and management. This project aims to develop novel algorithms for content analysis in microscopic images, such as segmentation of cell nuclei, detection of particular cell structures and tracing of cell changes over time. Such algorithms would be valuable in turning image data into useful biological knowledge. These studies will focus on computer vision methodologies for feature extraction and learning-based modelling.
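
As a rough sketch of the segmentation step described above, assuming nuclei appear as bright blobs on a darker background, the example below combines Otsu thresholding with a distance-transform watershed using scikit-image; the synthetic image and parameter choices are illustrative only.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops
from skimage.segmentation import watershed

def segment_nuclei(image, min_distance=10):
    """Segment bright, roughly round nuclei in a 2D greyscale image."""
    smoothed = gaussian(image, sigma=2)                   # suppress noise
    mask = smoothed > threshold_otsu(smoothed)            # foreground mask
    distance = ndi.distance_transform_edt(mask)           # distance to background
    peaks = peak_local_max(distance, min_distance=min_distance, labels=label(mask))
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)       # split touching nuclei

if __name__ == "__main__":
    # Synthetic image with two partially overlapping "nuclei".
    yy, xx = np.mgrid[0:100, 0:100]
    img = (np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0)
           + np.exp(-((yy - 60) ** 2 + (xx - 60) ** 2) / 200.0))
    nuclei = segment_nuclei(img)
    print("nuclei found:", len(regionprops(nuclei)))
```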

Neuroimaging computing for automated detection of longitudinal brain changes

Neuroimaging technologies, such as MRI, have transformed how we study the brain under normal and pathological conditions. As imaging facilities become increasingly accessible, more and more imaging data are collected from patients with chronic disorders in longitudinal settings. Such large-scale neuroimaging data opens new possibilities for studying the brain with high translational impact, such as early detection of longitudinal changes in the brain and large-scale evaluation of imaging-based biomarkers. This project aims to develop novel computational methods that automatically detect longitudinal changes in the brain from large-scale longitudinal neuroimaging data, using machine-learning and deep-learning techniques.
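
A simplified sketch of a voxel-wise longitudinal comparison is given below; it assumes the baseline and follow-up scans are already co-registered and uses a fixed change threshold, both of which are assumptions made for illustration, whereas the project itself targets learning-based detection.

```python
import numpy as np

def zscore(volume):
    """Standardise intensities so scans from different sessions are comparable."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def longitudinal_change_map(baseline, followup, threshold=2.0):
    """Flag voxels whose normalised intensity changed markedly between visits.

    Assumes the two volumes are already spatially co-registered; a real
    pipeline would first apply rigid or deformable registration.
    """
    diff = zscore(followup) - zscore(baseline)
    return np.abs(diff) > threshold                       # boolean change mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(size=(32, 32, 32))
    followup = baseline.copy()
    followup[10:14, 10:14, 10:14] += 5.0                  # simulate a focal change
    change = longitudinal_change_map(baseline, followup)
    print("changed voxels:", int(change.sum()))
```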

Context modelling for large-scale medical image retrieval

Content-based medical image retrieval is a valuable mechanism to assist patient diagnosis. Unlike text-based search engines, it evaluates the similarity of images by comparing their visual features. Consequently, how best to encode complex visual features in a comparable mathematical form is crucial. Unlike image retrieval techniques proposed for general imagery, in the medical domain disease-specific contexts need to be modelled as the retrieval target. This project aims to study various techniques for visual feature extraction and context modelling in medical imaging, and to develop new methodologies for content-based image retrieval across a range of medical applications.
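
As a toy illustration of the retrieval mechanism, not of the disease-specific context models studied here, the sketch below encodes each image as a normalised intensity histogram and ranks a database by cosine similarity to the query; the histogram feature and the synthetic database are assumptions for illustration, and richer disease-specific features would replace them in practice.

```python
import numpy as np

def histogram_feature(image, bins=32):
    """Encode an image as a normalised intensity histogram (a toy visual feature)."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / (hist.sum() + 1e-8)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve(query_image, database_images, top_k=3):
    """Rank database images by visual similarity to the query."""
    q = histogram_feature(query_image)
    scores = [cosine_similarity(q, histogram_feature(img)) for img in database_images]
    return np.argsort(scores)[::-1][:top_k]               # indices of most similar images

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    database = [rng.random((64, 64)) ** p for p in (0.5, 1.0, 2.0, 4.0)]
    query = rng.random((64, 64)) ** 2.0                   # closest to the third entry
    print("top matches:", retrieve(query, database))
```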

Our experts: Dr Ashnil Kumar, Associate Professor Pablo Fernandez Penas, Associate Professor Jinman Kim, Dr Marina Ali

Industry partners: Westmead Hospital

Australia has one of the highest rates of melanoma in the world. Melanoma can be treated by simple lesion excision if diagnosed at an early stage. Sequential digital dermoscopy imaging is a technique that allows early detection; however, manual visual interpretation is subjective, and even well-trained physicians show inter-observer variability. To overcome these limitations, we are investigating machine learning algorithms to develop a computer-aided diagnosis system that detects and tracks changes in skin lesions.
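
The heuristic sketch below illustrates one way lesion change could be tracked between two sequential dermoscopy images, assuming the lesion is the largest dark region in each image; this thresholding approach and the synthetic images are illustrative only and stand in for the machine learning models under investigation.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def lesion_area(image):
    """Estimate lesion area in pixels, assuming the lesion is darker than skin."""
    mask = image < threshold_otsu(image)                  # dark region = candidate lesion
    regions = regionprops(label(mask))
    return max((r.area for r in regions), default=0)      # largest dark component

def lesion_growth(previous_image, current_image):
    """Relative change in lesion area between two sequential dermoscopy images."""
    a0, a1 = lesion_area(previous_image), lesion_area(current_image)
    return (a1 - a0) / (a0 + 1e-8)

if __name__ == "__main__":
    # Synthetic skin images: bright background with a dark disc that grows.
    yy, xx = np.mgrid[0:128, 0:128]
    def fake_dermoscopy(radius):
        img = np.full((128, 128), 0.9)
        img[(yy - 64) ** 2 + (xx - 64) ** 2 < radius ** 2] = 0.2
        return img
    print(f"area change: {lesion_growth(fake_dermoscopy(20), fake_dermoscopy(26)):+.0%}")
```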

Our experts: Dr Na Liu, Associate Professor Jinman Kim, Professor Ralph Nanan, Professor Mohamed Khadra

Industry partners: Telehealth Technology Centre, Nepean Hospital; Charles Perkins Centre Nepean

Telehealth technologies enhance the delivery of healthcare through powerful mechanisms such as social networking, notifications, patient education and information portals, and patient monitoring, whether remote (by the care team) or by family and friends. Telehealth can be broadly applied to many diseases. As a case study, obesity in multiple family members is common, and the importance of a family-based approach to weight management is well established. Our research aims to develop a family-focused application (app) with novel concepts that incentivise the whole family through family (social) networking, gamification, notifications, personalised analytics, goal setting and a reward mechanism. The app will be supported by a remote study nurse to encourage adherence. The proposed app features will be evaluated for their usefulness in helping to induce lifestyle change.