Model agnostic interpretability

  • Jessica Cooper

Student thesis: Doctoral Thesis (PhD)

Abstract

This thesis explores the development and application of model-agnostic interpretability methods for deep neural networks. I introduce novel techniques for interpreting trained models irrespective of their architecture, including Centroid Maximisation, an adaptation of feature visualisation for segmentation models; the Proxy Model Test, a new evaluation method for saliency mapping algorithms; and Hierarchical Perturbation (HiPe), a novel saliency mapping algorithm that achieves performance comparable to existing model-agnostic methods while reducing computational cost by a factor of 20. The utility of these interpretability methods is demonstrated through two case studies in digital pathology. The first study applies model-agnostic saliency mapping to generate pixel-level segmentations from weakly-supervised models, while the second study employs interpretability techniques to uncover potential relationships between DNA morphology and protein expression in CD3-expressing cells.
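To make the saliency-mapping contributions concrete, below is a minimal, hedged sketch of a generic perturbation-based (occlusion-style) model-agnostic saliency map, which is the family of methods the abstract refers to. The fixed-grid strategy, the `predict` callable, and all names here are illustrative assumptions, not the thesis's HiPe algorithm; HiPe itself refines salient regions hierarchically from coarse to fine to reduce the number of forward passes.

```python
# Illustrative occlusion-style, model-agnostic saliency sketch (assumed, not HiPe).
# The model is treated as a black box: mask a region, re-run inference, and
# record the drop in confidence for that region.
import numpy as np

def occlusion_saliency(image, predict, patch=16, baseline=0.0):
    """Return an (H, W) saliency map for `image`.

    image:   (H, W, C) array
    predict: callable mapping a batch (N, H, W, C) to confidences of shape (N,)
    """
    h, w, _ = image.shape
    base_score = predict(image[None])[0]
    saliency = np.zeros((h, w), dtype=np.float32)

    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = baseline
            score = predict(occluded[None])[0]
            # Larger confidence drop => more salient region.
            saliency[y:y + patch, x:x + patch] = base_score - score
    return saliency
```

Because the only interface to the model is the `predict` call, a sketch like this works for any architecture; the cost is one forward pass per occluded region, which is the expense that a hierarchical scheme such as HiPe is designed to cut.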
Date of Award: 3 Dec 2024
Original language: English
Awarding Institution
  • University of St Andrews
Supervisors: Oggie Arandelovic & David James Harrison

Keywords

  • Artificial intelligence
  • Machine learning
  • Interpretability
  • Explainability
  • Histopathology
  • Saliency mapping
  • Segmentation
  • Immune contexture
  • Knowledge discovery

Access Status

  • Full text open
