Research Area | Application, Interpretability and Clinical Translation of Deep Learning Algorithms for Medical Images

Pratik Shah

Research studies led by Dr. Shah in his laboratory have created new paradigms for using low-cost images, captured with simple optical principles, for point-of-care clinical diagnosis, reducing dependence on specialized medical imaging devices and on biological and chemical processes. Recent peer-reviewed publications have also communicated interpretable tools and methods for generative, predictive, and classification algorithms that obtain medical diagnostic information from cells, tissues, and organs. For example:

  • The generalizability of deep learning models for segmenting complex patterns in images is not well understood and rests on the anecdotal assumption that more training data improves performance. Research led by Dr. Shah, published in a Cell Reports Methods paper, reported a novel end-to-end toolkit for improving the generalizability and transparency of clinical-grade DL architectures. Researchers and clinicians can use this toolkit to identify hidden patterns embedded in images and to overcome underspecification of key non-disease and clinical labels that drives false-positive and false-negative outcomes in high-dimensional learning systems. The key findings from this study focused on the evaluation of medical images, but the methods and approach should generalize to other RGB and grayscale natural-world image segmentation tasks. The methods for benchmarking, visualization, and validation of deep learning models and images communicated in this study have wide applications in biomedical research and in uncertainty estimation for regulatory science purposes. (Project and publication link)
  • In a collaboration led by Dr. Shah with Brigham and Women’s Hospital in Boston, MA, a novel “Computational staining” system that digitally stains photographs of unstained tissue biopsies with Haematoxylin and Eosin (H&E) dyes to diagnose cancer was published. This research also described an automated “Computational destaining” algorithm that removes dyes and stains from photographs of previously stained tissues, allowing reuse of patient samples. The method used neural networks to provide physicians with timely information about the anatomy and structure of the organ, saving time and precious biopsy samples. (Project and publication link)
  • In a collaboration led by Dr. Shah with Stanford University School of Medicine and Harvard Medical School, several novel mechanistic insights and methods to facilitate benchmarking and clinical and regulatory evaluation of generative neural networks and computationally H&E-stained images were reported. Specifically, high-fidelity, explainable, and automated computational staining and destaining algorithms were trained to learn mappings between pixels of unstained cellular organelles and their stained counterparts. A novel and robust loss function was devised for the deep learning algorithms to preserve tissue structure. This research communicated that the virtual staining neural network models developed in Dr. Shah's research lab generalized to accurately stain previously unseen images acquired from patients and tumor grades not represented in the training data. Neural activation maps in response to various tumors and tissue types were generated, providing the first account of the mechanisms used by deep learning models for virtual H&E staining and destaining. Image processing analytics and statistical testing were used to benchmark the quality of the generated images. Finally, the computationally stained images were successfully evaluated with multiple pathologists for prostate tumor diagnoses and clinical decision-making. (Project and publication link)
  • In a research study led by Dr. Shah, a complementary end-to-end deep learning framework for automatic classification and localization of prostate tumors from unstained and virtually H&E-stained core biopsy images was developed. A computationally H&E-stained patch was first generated from an unstained input image using the generative models described above and then fed into a ResNet-18 classifier for labeling as tumor or non-tumor. A deep weakly-supervised gradient backpropagation (GBP) algorithm was used to localize class-specific (tumor) regions in images output by the ResNet-18 classifier: if an input patch was classified as tumor, the GBP localization module generated a saliency map locating the tumor regions on the computationally stained image. The core contributions were to extend the utility and performance of generative virtual H&E staining deep learning methods, models, and computationally H&E-stained images to tumor localization and classification. (Publication link)
  • In a collaboration led by Dr. Shah with Beth Israel Deaconess Medical Center in Boston, MA, dark-field imaging of the capillary bed under the tongue of consenting patients in emergency rooms was investigated for diagnosing sepsis (a bloodborne bacterial infection). A neural network capable of distinguishing between images from non-septic and septic patients with more than 90% accuracy was reported for the first time. This approach can rapidly stratify patients, support rational use of antibiotics, reduce disease burden in hospital emergency rooms, and combat antimicrobial resistance. (Project and publication link)
  • Dr. Shah led research studies showing that signatures associated with fluorescent porphyrin biomarkers (linked with tumors and periodontal diseases) can be successfully predicted from standard white-light photographs of the mouth, reducing the need for fluorescent imaging at the point-of-care. (Project and publication link)
  • Research studies led by Dr. Shah reported automated segmentation of oral diseases by neural networks from standard white-light photographs, and correlations of disease pixels with systemic health conditions, such as optic nerve abnormalities, to generate personalized risk scores for patients. (Project and publication link, Project and publication link)
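The structure-preserving loss devised for the virtual staining models can be illustrated with a generic sketch. The published loss function is not reproduced here; the combination below — a per-pixel L1 term plus an image-gradient difference term that penalizes blurred or displaced tissue boundaries — is a hypothetical stand-in for how such a loss can be constructed:

```python
import numpy as np

def structure_preserving_loss(generated, target, lam=0.5):
    """Hypothetical structure-preserving loss: per-pixel L1 plus an
    image-gradient difference term. Illustrative stand-in only, not
    the published loss function."""
    # Per-pixel reconstruction error between generated and target stains
    l1 = np.mean(np.abs(generated - target))
    # Differences of horizontal/vertical image gradients penalize
    # distorted edges, i.e., broken tissue structure
    gx = np.mean(np.abs(np.diff(generated, axis=1) - np.diff(target, axis=1)))
    gy = np.mean(np.abs(np.diff(generated, axis=0) - np.diff(target, axis=0)))
    return l1 + lam * (gx + gy)

rng = np.random.default_rng(0)
target = rng.random((32, 32))                       # toy "stained" patch
noisy = target + 0.1 * rng.standard_normal((32, 32))

print(structure_preserving_loss(target, target))    # 0.0 for a perfect match
print(structure_preserving_loss(noisy, target) > 0) # True
```

In practice a term like this would be added to the generative network's training objective, so that minimizing it rewards both color fidelity and intact tissue morphology.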
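The classification-and-localization step above — a classifier followed by GBP saliency — can be sketched with a toy network. This is a minimal illustration, assuming a one-hidden-layer network in place of the ResNet-18 classifier and interpreting GBP as guided backpropagation (where the ReLU backward pass keeps only positive gradients at positive activations); shapes and names are illustrative, not the published implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 8, 8                              # toy "image patch"
n_hidden, n_classes = 16, 2              # tumor vs. non-tumor

x = rng.standard_normal(H * W)           # flattened input patch
W1 = rng.standard_normal((n_hidden, H * W)) * 0.1
W2 = rng.standard_normal((n_classes, n_hidden)) * 0.1

# Forward pass: linear -> ReLU -> linear -> class scores
pre = W1 @ x
act = np.maximum(pre, 0.0)
scores = W2 @ act
pred = int(np.argmax(scores))            # predicted class index

# Guided backpropagation for the predicted class: at the ReLU, the
# gradient is passed only where (a) the forward pre-activation was
# positive and (b) the incoming gradient itself is positive.
grad_act = W2[pred]                              # d(score)/d(act)
guided = grad_act * (pre > 0) * (grad_act > 0)   # guided ReLU backward
saliency = np.abs(W1.T @ guided).reshape(H, W)   # per-pixel relevance

print(saliency.shape)   # (8, 8)
```

The resulting per-pixel map plays the role of the saliency map described above: high values mark input pixels most responsible for the "tumor" score, which is what makes the localization weakly supervised (only image-level labels are used, never pixel-level annotations).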

The examples described in this research area highlight contributions from Dr. Shah and his lab toward designing the next generation of computational medicine algorithms and biomedical processes that can assist physicians and patients at the point-of-care.