Whole slide imaging (WSI), or virtual microscopy, is an imaging modality used to convert animal or human pathology tissue slides into digital images for teaching, research, or clinical applications. The method is popular owing to demand from both education and clinical practice. Although modern whole slide scanners can now scan tissue slides at high resolution in a relatively short time, significant challenges remain unsolved, including the high cost of equipment and data storage. Machine learning and deep learning techniques in Computer Aided Diagnosis (CAD) platforms have begun to be widely used by physicians and researchers for biomedical image analysis. We are building a platform for histopathological image super-resolution and cancer grading and staging, with a main focus on pancreatic cancer. Here we present a computational approach for improving the resolution of images acquired from commonly available low-magnification commercial slide scanners. Images from such scanners can be acquired cheaply and are efficient in terms of storage and data transfer. However, they are generally of poorer quality than images from high-resolution scanners and microscopes, lack the resolution required in diagnostic or clinical environments, and hence are not used in such settings.
First, we developed a deep learning framework that implements regularized sparse coding to smoothly reconstruct high-resolution images from their low-resolution counterparts. Results show that our method produces images similar to those from high-resolution scanners, in both visual quality and quantitative measures, and that it compares favorably to several state-of-the-art methods across a number of test images.
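To illustrate the core idea, the sketch below shows coupled-dictionary sparse coding for super-resolution in the spirit of Yang et al.: a shared sparse code is computed for each low-resolution patch and decoded with a high-resolution dictionary. The function names, patch variables (lr_patches, hr_patches), and hyperparameters are illustrative assumptions, not our exact regularized formulation.

```python
# Minimal sketch of coupled-dictionary sparse-coding super-resolution
# (in the spirit of Yang et al., 2010). `lr_patches`/`hr_patches` and all
# hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import Lasso

def train_coupled_dictionaries(lr_patches, hr_patches, n_atoms=256):
    """Jointly learn low-res/high-res dictionaries from paired patches.

    lr_patches: (n_samples, d_lr) flattened low-resolution patches
    hr_patches: (n_samples, d_hr) flattened high-resolution patches
    """
    joint = np.hstack([lr_patches, hr_patches])        # pair LR and HR patches
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=0.1)
    D = learner.fit(joint).components_                 # (n_atoms, d_lr + d_hr)
    d_lr = lr_patches.shape[1]
    return D[:, :d_lr], D[:, d_lr:]                    # split into D_l, D_h

def super_resolve_patch(lr_patch, D_l, D_h, lam=0.1):
    """Sparse-code one low-res patch over D_l, then decode it with D_h."""
    coder = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    coder.fit(D_l.T, lr_patch)    # min ||lr_patch - D_l^T a||^2 + lam * |a|_1
    return D_h.T @ coder.coef_    # high-res estimate from the shared code
```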
To further improve these results, we used a convolutional neural network (CNN) based approach, trained specifically to take low-resolution slide scanner images of cancer tissue and convert them into high-resolution images.
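As a sketch of the kind of network that can learn this mapping, the following is a minimal SRCNN-style model in PyTorch; our actual architecture, loss, and training pipeline may differ, and the tensors below are placeholders.

```python
# Minimal SRCNN-style network (after Dong et al., 2014) as an illustration
# of CNN-based super-resolution; depth, losses, and data here are assumed.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Maps a bicubically upscaled low-res image to a high-res estimate."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),        # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.net(x)

# One training step: minimize pixel-wise MSE against the high-res scan.
model = SRCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
lr_batch = torch.rand(8, 3, 96, 96)   # placeholder upscaled low-res patches
hr_batch = torch.rand(8, 3, 96, 96)   # placeholder matching high-res patches
loss = nn.MSELoss()(model(lr_batch), hr_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```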
We validated these resolution improvements with computational analysis, showing that the enhanced images yield the same quantitative results. This project is still ongoing; we are now investigating the use of intermediate resolutions to further improve image quality with recurrent neural networks.
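The computational validation mentioned above is commonly reported with full-reference metrics such as PSNR and SSIM; a minimal sketch using scikit-image follows, where the file names are hypothetical and the exact metrics used in this work are an assumption.

```python
# Hedged sketch of the quantitative comparison: PSNR and SSIM between an
# enhanced image and the matching high-resolution scan (file names are
# hypothetical; RGB images and scikit-image >= 0.19 are assumed).
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

enhanced = io.imread("enhanced.png")    # output of the super-resolution model
reference = io.imread("reference.png")  # ground-truth high-resolution scan

psnr = peak_signal_noise_ratio(reference, enhanced)
ssim = structural_similarity(reference, enhanced, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```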
On the other hand, current approaches to pathological grading and staging of many cancer types, such as breast and pancreatic cancer, lack accuracy and interobserver agreement. Google Research recently used the Inception architecture for high-accuracy tumor cell localization. However, because our group has been uncovering the prognostic role of stromal reorganization in different cancer types, including pancreatic cancer, which is projected to become the second leading cause of cancer death by 2030, we use a holistic approach that includes both stroma and cells from small TMA (tissue microarray) punches of different cancer grades, accompanied by normal samples.
For this study, we used transfer learning from four award-winning networks (VGG16, VGG19, GoogLeNet, and ResNet101) for the task of pancreatic cancer grading.
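A minimal sketch of this transfer-learning setup is shown below, using the ImageNet-pretrained ResNet101 from torchvision with a new four-class grading head; the freezing strategy, hyperparameters, and data shapes are assumptions rather than our exact training recipe.

```python
# Minimal transfer-learning sketch for four-tier grading with ResNet101 via
# torchvision; the freezing scheme, optimizer settings, and tensors are
# assumptions, not the exact recipe used in this study.
import torch
import torch.nn as nn
from torchvision import models

num_grades = 4  # four-tier grading (e.g., normal plus three tumor grades)

model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False               # freeze the pretrained backbone

# Replace the ImageNet classification head with a new grading head.
model.fc = nn.Linear(model.fc.in_features, num_grades)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)           # placeholder TMA punch crops
labels = torch.randint(0, num_grades, (8,))   # placeholder grade labels
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```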
Although all of these networks have shown strong performance on natural image classification, ResNet101 showed the highest performance here, with 88% accuracy in four-tier grading and higher accuracy in all one-versus-one comparisons between normal tissue and the individual grades. We then further fine-tuned this network for different TNM classification and staging tasks; although all the images were selected from small regions of the pancreas, the results show the promising capability of CNNs to help pathologists with diagnosis. To achieve higher accuracy, we have nearly doubled the size of the dataset; training is still in progress, and we will update the audience in future talks.