Abstract
Convolutional neural networks (CNNs) have become increasingly popular in recent years because of their ability to tackle complex learning problems such as object detection and object localization. They are being used for a variety of tasks, such as detection and localization of tissue abnormalities in medical imaging, with an accuracy that approaches human predictive performance. This success is primarily due to the ability of CNNs to extract discriminant features at multiple levels of abstraction.
Photoacoustic (PA) imaging is a promising new modality with significant clinical potential. The availability of a large dataset of three-dimensional PA images of ex-vivo human prostate and thyroid specimens facilitated the current study, which aims to evaluate the efficacy of CNNs for cancer diagnosis. In PA imaging, a short pulse of near-infrared laser light is sent into the tissue, and the image is formed by focusing the ultrasound waves that are photoacoustically generated by the absorption of that light, thereby mapping the optical absorption in the tissue. By choosing multiple wavelengths of laser light, multispectral photoacoustic (MPA) images of the same tissue specimen can be obtained. The objective of this thesis is to implement a deep learning architecture for cancer detection using the MPA image dataset.
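For context, the contrast mechanism described above can be stated explicitly. The following is a standard photoacoustic relation included here for illustration, not a formula quoted from the thesis:

```latex
% Initial photoacoustic pressure generated by pulsed illumination:
% \Gamma is the Grueneisen parameter, \mu_a the wavelength-dependent optical
% absorption coefficient, and \Phi the local light fluence.
\[
  p_0(\mathbf{r}) \;=\; \Gamma(\mathbf{r})\,\mu_a(\mathbf{r},\lambda)\,\Phi(\mathbf{r},\lambda)
\]
% Acquiring the same specimen at several wavelengths \lambda yields the
% multispectral (MPA) image stack used in this work.
```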
In this study, we built and examined a fully automated deep learning framework that learns to detect and localize cancer regions in a given specimen entirely from its MPA image dataset. The dataset for this work consisted of samples with sizes ranging from 12 × 45 × 200 pixels to 64 × 64 × 200 pixels at five wavelengths, namely 760 nm, 800 nm, 850 nm, 930 nm, and 970 nm.
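As an illustration of how volumes of this kind might be organized for a 3D network, the following sketch stacks the five wavelengths as channels and zero-pads specimens to a common spatial size; the array ordering, padding strategy, and function names are assumptions for illustration, not taken from the thesis.

```python
# Illustrative layout (assumed, not the thesis implementation) for one MPA
# specimen: a 4D array of (wavelengths, x, y, z) voxels.
import numpy as np

WAVELENGTHS_NM = [760, 800, 850, 930, 970]

# Hypothetical smallest and largest specimens from the reported size range.
small_specimen = np.zeros((len(WAVELENGTHS_NM), 12, 45, 200), dtype=np.float32)
large_specimen = np.zeros((len(WAVELENGTHS_NM), 64, 64, 200), dtype=np.float32)

def pad_to_common_size(volume, target=(64, 64, 200)):
    """Zero-pad a (wavelength, x, y, z) volume to a common spatial size so
    that variable-sized specimens can be batched for a 3D CNN."""
    pads = [(0, 0)] + [(0, t - s) for s, t in zip(volume.shape[1:], target)]
    return np.pad(volume, pads, mode="constant")

print(pad_to_common_size(small_specimen).shape)  # -> (5, 64, 64, 200)
```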
The proposed algorithms first extract features using convolutional kernels and then detect cancer tissue with a softmax function in the last layer of the network. The area under the ROC curve (AUC) was calculated to evaluate the performance of the cancer tissue detector, with very promising results. To the best of our knowledge, this is one of the first applications of a deep 3D CNN to a large MPA cancer dataset for prostate and thyroid cancer detection.
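A minimal sketch of such a pipeline is shown below, assuming a PyTorch implementation; the layer counts, kernel sizes, and toy evaluation batch are illustrative assumptions and do not reproduce the architecture or results reported in the thesis.

```python
# Sketch: 3D convolutional feature extraction, softmax classification,
# and AUC evaluation on a toy batch of five-wavelength MPA volumes.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class Simple3DCNN(nn.Module):
    def __init__(self, in_channels=5, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling copes with variable volume sizes
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        # x: (batch, wavelengths, x, y, z)
        h = self.features(x).flatten(1)
        return torch.softmax(self.classifier(h), dim=1)

# Hypothetical toy batch: four 64 x 64 x 200 volumes at five wavelengths.
model = Simple3DCNN()
volumes = torch.randn(4, 5, 64, 64, 200)
labels = torch.tensor([0, 1, 0, 1])          # 1 = cancer, 0 = normal (illustrative)
probs = model(volumes)                       # class probabilities; cancer score in column 1
auc = roc_auc_score(labels.numpy(), probs[:, 1].detach().numpy())
print(f"AUC on this toy batch: {auc:.3f}")
```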
While previous efforts using the same dataset relied on decision making with mathematically extracted image features, this work demonstrates that the process can be automated without any significant loss in accuracy. Another major contribution of this work is to demonstrate that the prostate and thyroid datasets can be combined to produce improved results for cancer diagnosis.
Library of Congress Subject Headings
Cancer--Diagnosis--Technological innovations; Multispectral imaging; Optoacoustic spectroscopy; Neural networks (Computer science); Convolutions (Mathematics); Machine learning
Publication Date
7-2019
Document Type
Dissertation
Student Type
Graduate
Degree Name
Imaging Science (Ph.D.)
Department, Program, or Center
Chester F. Carlson Center for Imaging Science (COS)
Advisor
Navalgund Rao
Advisor/Committee Member
Pengcheng Shi
Advisor/Committee Member
Vikram Dogra
Recommended Citation
Jnawali, Kamal, "Automatic Cancer Tissue Detection Using Multispectral Photoacoustic Imaging" (2019). Thesis. Rochester Institute of Technology. Accessed from
https://repository.rit.edu/theses/10129
Campus
RIT – Main Campus
Plan Codes
IMGS-PHD