This paper makes a direct contribution to the mathematical theory and practical implementation of interpretable deep fuzzy learning, addressing the interpretability challenge posed by the typically non-interpretable nature of training data. A deep autoencoder, composed of a finite number of stochastic fuzzy filters, is learned using variational Bayes for an efficient representation of high-dimensional feature vectors at different levels of abstraction. Robustness of the deep model to data outliers is achieved by incorporating heavy-tailed noise distributions in the inference mechanism. The proposed deep stochastic fuzzy autoencoder enables interpretable classification by means of an induced mapping from a non-interpretable feature space to an interpretable parameter space. This induced mapping can be used to explain the classifier's predictions by interpreting the non-interpretable high-dimensional feature vectors. Experimental results on various benchmark datasets (e.g., Freiburg Groceries, Caltech-101, Caltech-256, Adobe Panoramas, and MNIST) show that the proposed approach not only outperforms widely used machine learning methods on classification tasks but also offers interpretability as an added advantage.
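As a rough illustration of the robustness mechanism described above, the sketch below pairs a Gaussian variational posterior (learned with variational Bayes via the reparameterization trick) with a heavy-tailed Student-t reconstruction likelihood, so that outlying inputs are downweighted in the inference objective. This is a generic variational autoencoder, not a reproduction of the paper's stochastic fuzzy filters or its interpretable parameter mapping; the class name `RobustVAE` and all hyperparameters (`z_dim`, `df`, layer sizes) are illustrative assumptions.

```python
# Minimal sketch: variational autoencoder with a heavy-tailed (Student-t)
# likelihood for outlier robustness. Illustrative only; not the paper's
# stochastic fuzzy autoencoder.
import torch
import torch.nn as nn
from torch.distributions import Normal, StudentT, kl_divergence


class RobustVAE(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, x_dim=784, z_dim=16, h_dim=128, df=4.0):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)         # posterior mean head
        self.log_sigma = nn.Linear(h_dim, z_dim)  # posterior log-std head
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        self.log_scale = nn.Parameter(torch.zeros(x_dim))  # likelihood scale
        self.df = df  # degrees of freedom: smaller df => heavier tails

    def forward(self, x):
        h = self.enc(x)
        q = Normal(self.mu(h), self.log_sigma(h).exp())  # variational posterior
        z = q.rsample()                                  # reparameterization trick
        # Student-t log-likelihood decays polynomially, so outlying
        # features incur a bounded penalty compared with a Gaussian.
        lik = StudentT(self.df, self.dec(z), self.log_scale.exp())
        nll = -lik.log_prob(x).sum(-1)
        kl = kl_divergence(q, Normal(0.0, 1.0)).sum(-1)  # KL to standard prior
        return (nll + kl).mean()                         # negative ELBO

# Usage: one gradient step on random data of MNIST-like dimensionality.
model = RobustVAE()
x = torch.rand(32, 784)
loss = model(x)
loss.backward()
```

The degrees-of-freedom parameter `df` controls tail heaviness: as `df` grows the Student-t likelihood approaches a Gaussian, while small values make the reconstruction term far less sensitive to individual outlying features.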
Journal: Engineering Applications of Artificial Intelligence
Early online date: 25 Jan 2024
Publication status: E-pub ahead of print - 25 Jan 2024
- Fuzzy model
- Variational Bayesian