Efficient visualization of multi-dimensional and multi-modal biomedical images for image interpretation and diagnosis.
Professor David Feng, Associate Professor Jinman Kim.
N/A
Public health demand and research advances are pushing healthcare into an era of transformation. The introduction of next-generation multi-modality medical imaging scanners has brought new diagnostic capabilities that are resulting in tremendous advances in patient care. However, these modern scanners are prevented from reaching their full capacity by the limited ability to visualize and understand the myriad of data they produce for diagnosis, where slice-by-slice display with simple image processing tools is currently the norm. The massive number of images (in the thousands) and the complex inter-relations between the functional (PET) and anatomical (CT) images mean that access to and assimilation of the critical data within these images, by the reader and end-user (e.g. neurosurgeon, cardiothoracic surgeon), will become a major problem.

The aim of this project is to investigate the new possibilities arising from the availability of co-aligned and complementary information in multi-dimensional PET/CT scanners, and to develop new algorithms and techniques that provide improved image understanding and efficient visualization of these images.
The opportunity ID for this research opportunity is 315.