Variational Bayesian deep fuzzy models for interpretable classification

Mohit Kumar*, Sukhvir Singh, Juliana Bowles

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper contributes to the mathematical theory and practical implementation of interpretable deep fuzzy learning, addressing the interpretability challenge posed by the typically non-interpretable nature of training data. A deep autoencoder, composed of a finite number of stochastic fuzzy filters, is learned using variational Bayes to efficiently represent high-dimensional feature vectors at different levels of abstraction. Robustness of the deep model to data outliers is achieved by incorporating heavy-tailed noise distributions in the inference mechanism. The proposed deep stochastic fuzzy autoencoder enables interpretable classification through an induced mapping from a non-interpretable feature space to an interpretable parameter space. This induced mapping can be used to explain the classifier’s predictions by interpreting the non-interpretable high-dimensional feature vectors. Experimental results on several benchmark datasets (e.g., Freiburg Groceries, Caltech-101, Caltech-256, Adobe Panoramas, and MNIST) show that the proposed approach not only outperforms widely used machine learning methods on classification tasks but also offers interpretability as an added advantage.
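To give a concrete picture of the ingredients named in the abstract, the following is a minimal sketch (in PyTorch) of a single stochastic fuzzy autoencoder layer trained with a variational-Bayes-style objective, using a heavy-tailed Student-t likelihood for outlier robustness. This is not the authors' implementation: the class name `StochasticFuzzyAutoencoder`, the parameter `n_rules`, the Gaussian membership encoder, and the fixed degrees of freedom and scale are all illustrative assumptions based only on the abstract.

```python
# Sketch only: one stochastic fuzzy autoencoder layer with a
# variational-Bayes-style (negative ELBO) training objective.
import torch
import torch.nn as nn
from torch.distributions import Normal, StudentT, kl_divergence

class StochasticFuzzyAutoencoder(nn.Module):
    def __init__(self, in_dim, n_rules):
        super().__init__()
        # Gaussian membership functions define the fuzzy encoder.
        self.centers = nn.Parameter(torch.randn(n_rules, in_dim))
        self.log_widths = nn.Parameter(torch.zeros(n_rules, in_dim))
        # Variational posterior over decoder (consequent) weights.
        self.w_mu = nn.Parameter(torch.zeros(n_rules, in_dim))
        self.w_logvar = nn.Parameter(torch.full((n_rules, in_dim), -3.0))

    def encode(self, x):
        # Normalized rule firing strengths serve as the latent "code".
        d = ((x.unsqueeze(1) - self.centers) / self.log_widths.exp()) ** 2
        g = torch.exp(-0.5 * d.sum(-1))              # rule activations
        return g / (g.sum(-1, keepdim=True) + 1e-9)

    def forward(self, x):
        code = self.encode(x)
        # Reparameterized sample of decoder weights (stochastic filter).
        q_w = Normal(self.w_mu, (0.5 * self.w_logvar).exp())
        w = q_w.rsample()
        x_hat = code @ w                             # reconstruction
        # Heavy-tailed Student-t likelihood: robust to data outliers.
        rec = -StudentT(df=3.0, loc=x_hat, scale=0.1).log_prob(x).sum(-1)
        kl = kl_divergence(q_w, Normal(0.0, 1.0)).sum()
        return rec.mean() + kl / x.shape[0], code    # negative ELBO, code

# Usage: one gradient step on random data.
model = StochasticFuzzyAutoencoder(in_dim=8, n_rules=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss, code = model(torch.randn(32, 8))
opt.zero_grad()
loss.backward()
opt.step()
```

In the paper's framing, the normalized rule activations (`code` above) live in an interpretable parameter space, which is what allows the induced mapping from raw features to be inspected; stacking several such layers would yield the deep autoencoder described in the abstract.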
Original language: English
Article number: 107900
Number of pages: 24
Journal: Engineering Applications of Artificial Intelligence
Volume: 132
Early online date: 25 Jan 2024
DOIs
Publication status: Published - Jun 2024

Keywords

  • Fuzzy model
  • Variational Bayesian
  • Interpretability
  • Classification
