About Associate Professor Jinman Kim

My research is in the development of machine learning algorithms optimized for multi-modal biomedical image analysis and visualization.

My research expertise is in machine learning (including cascaded, ensemble, and unsupervised feature learning) for the segmentation, detection, retrieval, classification and visualization of multi-modal biomedical images.

My research, in deep collaboration with a number of hospitals, has produced high-impact research outcomes, many of which are in use in hospital settings and clinical trials. I am involved in multiple research grants (both national and international), including the ARC Industrial Training Centre in Innovative Bioengineering (as a theme leader in medical imaging).


Selected publications

1. J. Kim, W. Cai, D. Feng and S. Eberl. “Segmentation of Volume of Interest from Multi-Dimensional Dynamic PET Images by Integrating Spatial and Temporal Features”, IEEE Transactions on Information Technology in Biomedicine, 10(4):637-46, 2006. (47 citations. This journal has since been renamed IEEE Journal of Biomedical and Health Informatics. Impact Factor: 3.451) Statement: This paper proposed the first method that quantitatively improved image segmentation by combining noisy signals from both the spatial and temporal image domains. This integration of signals enabled accurate segmentation of noisy images with reduced artefacts. (A minimal illustrative sketch of this spatial-temporal integration appears after this list.)
2. J. Schmid, J. Kim and N. Magnenat-Thalmann. “Robust Statistical Shape Models for MRI Bone Segmentation in Presence of Small Image Field of View”, Medical Image Analysis, 15(1):155-68, 2010. (80 citations. Impact Factor: 4.188) Statement: This was the first work to use incomplete (erroneous) data to construct a robust multi-resolution statistical shape model for medical image segmentation, a process that previously required complete data. This is important because most medical image datasets are incomplete.
3. Y. Song, W. Cai, J. Kim and D.D. Feng, “A Multi-Stage Discriminative Model for Tumor and Lymph Node Detection in Thoracic Images”, IEEE Transactions on Medical Imaging, 31(5):1061-75, 2012. (37 citations. Impact Factor: 3.942) Statement: This paper presented a new methodology in which complementary features from multiple image modalities could be used in the classification of different tissue types. The method enabled the differentiation of primary tumour sites from nodal tumour sites, potentially providing new insights into the spread of disease.
4. L. Bi, J. Kim, A. Kumar, L. Wen, D. Feng and M. Fulham, “Automatic Detection and Classification of Regions of FDG Uptake in Whole-Body PET-CT Lymphoma Studies”, Computerized Medical Imaging and Graphics, 60:3-10, 2017. (9 citations, Impact Factor: 1.738) Statement: This paper was an invited submission, extending a paper (“Adaptive Supervoxel Patch-based Region Classification in Whole-Body PET-CT”) that won the Best Paper Award at the MICCAI workshop on Computational Methods for Molecular Imaging (CMMI), 2015. The paper introduced adaptive supervoxel patch classification, which adapts to the varied shapes and inherent irregular structures in medical images, in contrast to conventional approaches for general data that use regular (rectangular) sliding-window patches.
5. A. Kumar, J. Kim, D. Lyndon, M. Fulham and D. Feng, “An Ensemble of Fine-Tuned Convolutional Neural Networks for Medical Image Classification”, IEEE Journal of Biomedical and Health Informatics, 21(1):31-40, 2017. (36 citations, Impact Factor: 3.451) Statement: This paper was published in a special issue on deep learning. It was a pioneering work on using an ensemble of different deep models, each with its own specific strengths, to enable medical image classification at accuracies that could not be achieved by individual models operating in isolation. Since publication in 2017, the paper has accumulated citations rapidly. (A minimal illustrative sketch of the ensembling step appears after this list.)
6. A. Kumar, J. Kim, L. Wen, M. Fulham and D. Feng, “A graph-based approach for the retrieval of multi-modality medical images”, Medical Image Analysis, 18(2):330-42, 2014. (26 citations. Impact Factor: 4.188) Statement: This paper proposed the first graph model that exploited the complementary features in multi-modality image data to enable image retrieval that corresponded to clinical (semantic) guidelines, e.g., disease stage in lung cancer based on the proximity of tumours to anatomical structures.

Image Visualisation

7. J. Kim, W. Cai, S. Eberl and D. Feng. “Real-time Volume Rendering Visualization of Dual-Modality PET/CT Images with Interactive Fuzzy Thresholding Segmentation”, IEEE Transactions on Information Technology in Biomedicine, 11(2):161-9, 2007. (36 citations. Impact Factor: 2.072) Statement: This paper presented a novel visualisation framework for PET-CT that enabled interactive segmentation and visualisation of regions of interest in medical images, improving the clinician’s workflow efficiency.
8. Y. Jung, J. Kim, S. Eberl, M. Fulham and D. Feng, “Visibility-driven PET-CT visualisation with region of interest (ROI) segmentation”, The Visual Computer, 29(6):805-15, 2013. (15 citations. Impact Factor: 1.468) Statement: This was the first visualisation algorithm to quantify the ‘occlusion’ caused by objects in one modality onto the other, allowing the level of occlusion to be adjusted so that important structures from either modality remain visible.
9. Y. Jung, J. Kim, A. Kumar, D. Feng and M. Fulham, “Feature of Interest‐Based Direct Volume Rendering Using Contextual Saliency‐Driven Ray Profile Analysis”, Computer Graphics Forum, 2018. doi: 10.1111/cgf.13308 (Impact Factor: 1.611) Statement: This recently published paper presented a new algorithm that uses a saliency algorithm to detect visually important areas of an image for automated transfer function generation. This work was invited for presentation at the CGF 2018 conference.
10. Y. Jung, J. Kim, D. Feng and M. Fulham, “Occlusion and Slice-based Volume Rendering Augmentation for PET-CT”, IEEE Journal of Biomedical and Health Informatics, 21(4):1005-1014, 2017. (4 citations, Impact Factor: 3.451)
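
As mentioned in publication 1, combining signals from the spatial and temporal image domains can improve the segmentation of noisy dynamic PET data. The Python sketch below is a minimal illustration of that general idea on synthetic data, not the published method: it pairs a locally averaged uptake frame (spatial feature) with each voxel's normalised time-activity curve (temporal feature) and clusters the resulting per-voxel feature vectors. The array shapes, filter size and use of k-means are assumptions made for the example.

# Minimal sketch: spatial-temporal feature integration for dynamic PET
# segmentation (illustrative only; not the published algorithm).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

# Synthetic dynamic PET volume with axes (time, z, y, x).
rng = np.random.default_rng(0)
pet = rng.random((8, 16, 32, 32)).astype(np.float32)

# Spatial feature: locally averaged late-frame uptake (suppresses voxel noise).
spatial = uniform_filter(pet[-1], size=3)

# Temporal feature: each voxel's time-activity curve, normalised per voxel.
tac = pet.reshape(pet.shape[0], -1).T                # (n_voxels, n_frames)
tac = tac / (tac.sum(axis=1, keepdims=True) + 1e-8)

# Integrate both domains into one feature vector per voxel, then cluster.
features = np.hstack([spatial.reshape(-1, 1), tac])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(pet.shape[1:])         # (z, y, x) label map
print(segmentation.shape, np.bincount(labels))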
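
Publication 5 describes classifying medical images with an ensemble of fine-tuned convolutional neural networks. The sketch below illustrates only the ensembling step, under the assumption that each model's class-probability outputs are already available: probabilities are averaged across models (soft voting) and the highest-scoring class is taken. The models, images and probability values are hypothetical placeholders, not the published pipeline.

# Minimal sketch: soft voting over the probability outputs of several
# fine-tuned classifiers (the CNNs are assumed to be trained elsewhere and
# are represented here only by their softmax outputs).
import numpy as np

def ensemble_predict(per_model_probs):
    """Average per-model class probabilities and return the argmax class."""
    stacked = np.stack(per_model_probs)     # (n_models, n_images, n_classes)
    mean_probs = stacked.mean(axis=0)       # soft voting across the ensemble
    return mean_probs.argmax(axis=1)        # predicted class per image

# Hypothetical softmax outputs of three fine-tuned CNNs on four images.
model_a = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.4, 0.3, 0.3], [0.2, 0.2, 0.6]])
model_b = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
model_c = np.array([[0.8, 0.1, 0.1], [0.3, 0.5, 0.2], [0.3, 0.5, 0.2], [0.2, 0.1, 0.7]])

print(ensemble_predict([model_a, model_b, model_c]))   # -> [0 1 0 2]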