Abstract
Emotion recognition from facial expressions and non-verbal speech represents an important area in affective computing. It has been extensively studied, from classical feature extraction techniques to more recent deep learning approaches. However, most of these approaches face two major challenges: (1) robustness, i.e. whether a model can still make correct predictions when its input is degraded by, for example, noise; and (2) cross-dataset generalisation, i.e. whether a model trained on one dataset can make accurate inferences on another. To address these challenges directly, we first propose the application of a Spiking Neural Network (SNN) to predicting emotional states from facial expression and speech data, and then investigate and compare its accuracy under data degradation and on unseen new input. We evaluate our approach on third-party, publicly available datasets and compare it to state-of-the-art techniques. Our approach demonstrates robustness to noise: it achieves an accuracy of 56.2% for facial expression recognition (FER), compared with 22.64% for a CNN and 14.10% for an SVM, when input images are degraded with a noise intensity of 0.5, and the highest accuracy of 74.3% for speech emotion recognition (SER), compared with 21.95% for a CNN and 14.75% for an SVM, when white noise is applied to the audio. For generalisation, our approach achieves a consistently high accuracy of 89% for FER and 70% for SER in cross-dataset evaluation, which suggests that it learns more effective feature representations that generalise well across subjects' facial features and vocal characteristics.
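The noise-robustness evaluation described above can be sketched in a few lines. This is a minimal illustration only: the abstract does not specify the exact noise models or evaluation interface, so the salt-and-pepper image corruption in `add_image_noise`, the SNR-based white noise in `add_white_noise`, and the per-sample `model.predict` API are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def add_image_noise(image, intensity=0.5, rng=None):
    """Degrade an image with salt-and-pepper noise.

    `intensity` is the fraction of pixels replaced (cf. the "noise
    intensity of 0.5" in the abstract). Assumes a float image in [0, 1];
    the authors' exact noise model is not specified in the abstract.
    """
    rng = rng or np.random.default_rng()
    noisy = image.copy()
    mask = rng.random(image.shape) < intensity
    noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return noisy

def add_white_noise(audio, snr_db=10.0, rng=None):
    """Add white Gaussian noise to an audio signal at a target SNR (dB)."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return audio + rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)

def noisy_accuracy(model, inputs, labels, degrade):
    """Accuracy of a model (hypothetical `predict` interface) on degraded inputs."""
    preds = [model.predict(degrade(x)) for x in inputs]
    return float(np.mean([p == y for p, y in zip(preds, labels)]))
```

Under this protocol, the same trained model is scored on clean and degraded copies of the test set, so any accuracy gap is attributable to the input corruption alone.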
| Original language | English |
| --- | --- |
| Number of pages | 14 |
| Journal | Soft Computing |
| Volume | First Online |
| Early online date | 16 Jan 2021 |
| DOIs | |
| Publication status | E-pub ahead of print - 16 Jan 2021 |
Keywords
- Spiking neural network
- Facial emotion recognition
- Speech emotion recognition
- Unsupervised learning