This thesis explores the development and application of model-agnostic interpretability methods for deep neural networks. I introduce novel techniques for interpreting trained models irrespective of their architecture, including Centroid Maximisation, an adaptation of feature visualisation for segmentation models; the Proxy Model Test, a new evaluation method for saliency mapping algorithms; and Hierarchical Perturbation (HiPe), a novel saliency mapping algorithm that achieves performance comparable to existing model-agnostic methods while reducing computational cost by a factor of 20. The utility of these interpretability methods is demonstrated through two case studies in digital pathology. The first study applies model-agnostic saliency mapping to generate pixel-level segmentations from weakly-supervised models, while the second study employs interpretability techniques to uncover potential relationships between DNA morphology and protein expression in CD3-expressing cells.
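The abstract describes Hierarchical Perturbation (HiPe) only at a high level, as a low-cost, model-agnostic saliency mapping algorithm. The sketch below illustrates the general idea behind hierarchical-perturbation-style saliency mapping (coarse occlusions refined only where the model's output changes most); it is an assumption-based outline for illustration, and the function name `hierarchical_perturbation_saliency`, its parameters, and the refinement heuristic are not taken from the thesis itself.

```python
import numpy as np

def hierarchical_perturbation_saliency(image, predict, depth=3, keep_frac=0.5):
    """Illustrative hierarchical perturbation sketch (not the thesis's exact HiPe):
    occlude coarse regions first, then recursively refine only the regions whose
    occlusion changed the model output the most. `predict` maps an image to a
    scalar score (e.g. the probability of the class being explained)."""
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=float)
    base_score = predict(image)
    # Start with a single region covering the whole image.
    regions = [(0, 0, h, w)]
    for _ in range(depth):
        scored = []
        for (y, x, rh, rw) in regions:
            # Occlude the region by replacing it with its mean value.
            occluded = image.copy()
            occluded[y:y + rh, x:x + rw] = occluded[y:y + rh, x:x + rw].mean()
            delta = abs(base_score - predict(occluded))
            # Accumulate the output change as saliency over the occluded area.
            saliency[y:y + rh, x:x + rw] += delta
            scored.append((delta, (y, x, rh, rw)))
        # Refine only the most influential regions; the rest are never re-queried,
        # which is where the computational saving over exhaustive occlusion comes from.
        scored.sort(reverse=True, key=lambda t: t[0])
        keep = scored[: max(1, int(len(scored) * keep_frac))]
        regions = []
        for _, (y, x, rh, rw) in keep:
            if rh < 2 or rw < 2:
                continue
            hh, hw = rh // 2, rw // 2
            regions.extend([
                (y, x, hh, hw), (y, x + hw, hh, rw - hw),
                (y + hh, x, rh - hh, hw), (y + hh, x + hw, rh - hh, rw - hw),
            ])
    return saliency

# Example usage with a dummy scoring function (mean brightness):
# sal = hierarchical_perturbation_saliency(np.random.rand(64, 64), lambda im: im.mean())
```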
- Artificial intelligence
- Machine learning
- Interpretability
- Explainability
- Histopathology
- Saliency mapping
- Segmentation
- Immune contexture
- Knowledge discovery
Model-agnostic interpretability
Cooper, J. (Author). 3 Dec 2024
Student thesis: Doctoral Thesis (PhD)