Believe the HiPe: hierarchical perturbation for fast, robust, and model-agnostic saliency mapping

Jessica Cooper*, Ognjen Arandjelović, David J Harrison

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Understanding the predictions made by Artificial Intelligence (AI) systems is becoming more and more important as deep learning models are used for increasingly complex and high-stakes tasks. Saliency mapping – a popular visual attribution method – is one important tool for this, but existing formulations are limited by either computational cost or architectural constraints. We therefore propose Hierarchical Perturbation, a very fast and completely model-agnostic method for interpreting model predictions with robust saliency maps. Using standard benchmarks and datasets, we show that our saliency maps are of competitive or superior quality to those generated by existing model-agnostic methods – and are over 20× faster to compute.
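The abstract describes saliency mapping by perturbation: occlude regions of the input, measure how much the model's score changes, and attribute importance accordingly, with a coarse-to-fine hierarchy keeping the number of forward passes small. The sketch below is an illustrative simplification of that idea, not the authors' exact HiPe algorithm; the function name, the mean-value fill, and the quadrant-splitting scheme are assumptions made for the example.

```python
import numpy as np

def hierarchical_perturbation(model, image, max_depth=3):
    """Coarse-to-fine perturbation saliency (illustrative sketch, not the
    published HiPe algorithm). `model` maps an HxW array to a scalar score."""
    baseline = model(image)
    saliency = np.zeros_like(image, dtype=float)
    fill = image.mean()  # assumed perturbation: replace a region with the mean

    def recurse(y0, y1, x0, x1, depth):
        occluded = image.copy()
        occluded[y0:y1, x0:x1] = fill
        drop = baseline - model(occluded)  # score drop = region importance
        if drop <= 0:
            return  # occluding this region did not hurt: prune the branch
        # spread the evidence over the region, normalised by its area
        saliency[y0:y1, x0:x1] += drop / ((y1 - y0) * (x1 - x0))
        if depth < max_depth and (y1 - y0) > 1 and (x1 - x0) > 1:
            ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
            for ya, yb in ((y0, ym), (ym, y1)):
                for xa, xb in ((x0, xm), (xm, x1)):
                    recurse(ya, yb, xa, xb, depth + 1)

    # start from the four quadrants of the full image
    h, w = image.shape
    ym, xm = h // 2, w // 2
    for ya, yb in ((0, ym), (ym, h)):
        for xa, xb in ((0, xm), (xm, w)):
            recurse(ya, yb, xa, xb, 0)
    return saliency
```

Because branches whose occlusion leaves the score unchanged are pruned, the cost stays far below an exhaustive per-pixel occlusion sweep, which is the intuition behind the paper's reported speed-up. With a toy model that only scores the top-left quadrant of an 8×8 image, the returned map is positive there and zero elsewhere.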
Original language: English
Article number: 108743
Number of pages: 11
Journal: Pattern Recognition
Volume: 129
Early online date: 2 May 2022
DOIs
Publication status: Published - Sept 2022

Keywords

  • XAI
  • AI safety
  • Saliency mapping
  • Deep learning explanation
  • Interpretability
  • Prediction attribution

