Abstract
Understanding the predictions made by Artificial Intelligence (AI) systems is increasingly important as deep learning models are applied to ever more complex and high-stakes tasks. Saliency mapping - an easily interpretable visual attribution method - is one important tool for this, but existing formulations are limited by either computational cost or architectural constraints. We therefore propose Hierarchical Perturbation, a very fast and completely model-agnostic method for explaining model predictions with robust saliency maps. Using standard benchmarks and datasets, we show that our saliency maps are of competitive or superior quality to those generated by existing model-agnostic methods - and are over 20X faster to compute.
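The abstract does not give implementation details, but the core idea of perturbation-based saliency it describes can be illustrated with a short sketch: occlude regions of the input, measure how much the model's score drops, and refine only the regions that matter, working coarse-to-fine. The code below is a minimal, hypothetical illustration of that general scheme, not the authors' exact HiPe algorithm; the function name, parameters, and perturbation choice (mean-value occlusion) are all assumptions made for clarity.

```python
import numpy as np

def hierarchical_perturbation_saliency(model, image, levels=3, threshold=0.1):
    """Coarse-to-fine perturbation saliency (illustrative sketch only).

    model: callable mapping an (H, W, C) array to a scalar confidence score.
    image: (H, W, C) NumPy array.
    """
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float32)
    base_score = model(image)
    # Start with the whole image as the only region of interest.
    regions = [(0, 0, h, w)]

    for _ in range(levels):
        next_regions = []
        for (y0, x0, y1, x1) in regions:
            # Split the region into a 2x2 grid of sub-regions.
            ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
            for (ry0, rx0, ry1, rx1) in [(y0, x0, ym, xm), (y0, xm, ym, x1),
                                         (ym, x0, y1, xm), (ym, xm, y1, x1)]:
                if ry1 <= ry0 or rx1 <= rx0:
                    continue
                # Perturb: replace the sub-region with its mean value (occlusion).
                perturbed = image.copy()
                perturbed[ry0:ry1, rx0:rx1] = image[ry0:ry1, rx0:rx1].mean()
                drop = base_score - model(perturbed)
                saliency[ry0:ry1, rx0:rx1] += max(drop, 0.0)
                # Only refine sub-regions whose occlusion noticeably changes the score.
                if drop > threshold:
                    next_regions.append((ry0, rx0, ry1, rx1))
        if not next_regions:
            break
        regions = next_regions
    return saliency

# Toy usage with a stand-in "model" that scores the brightness of one quadrant.
toy_model = lambda img: float(img[:16, :16].mean())
img = np.random.rand(32, 32, 3).astype(np.float32)
heatmap = hierarchical_perturbation_saliency(toy_model, img)
```

Because only regions whose occlusion changes the score are subdivided further, the number of model evaluations stays far below that of an exhaustive fine-grained occlusion sweep, which is the kind of saving the abstract's speed claim refers to.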
| Original language | English |
|---|---|
| Number of pages | 25 |
| Publication status | Published - 22 Feb 2021 |
Keywords
- XAI
- AI safety
- Saliency mapping
- Deep learning explanation
- Prediction attribution
Fingerprint
Dive into the research topics of 'Believe the HiPe: Hierarchical Perturbation for fast, robust and model-agnostic explanations'. Together they form a unique fingerprint.
Projects
- 2 Finished
- ICAIRD: I-CAIRD: Industrial Centre for AI Research in Digital Diagnostics
Harris-Birtill, D. C. C. (PI) & Arandelovic, O. (CoI)
1/02/19 → 31/01/23
Project: Standard
- ICAIRD: I-CAIRD: Industrial Centre for AI Research in Digital Diagnostics
Harrison, D. J. (PI)
1/02/19 → 31/01/22
Project: Standard