Multimodal Latent Emotion Recognition from Micro-expression and Physiological Signals

Liangfei Zhang, Yifei Qian, Ognjen Arandjelovic, Anthony Zhu

Research output: Working paper › Preprint

Abstract

This paper examines the benefits of incorporating multimodal data to improve latent emotion recognition accuracy, focusing on micro-expressions (ME) and physiological signals (PS). We propose a novel multimodal learning framework that combines ME and PS, comprising a 1D separable and mixable depthwise inception network, a standardised normal distribution weighted feature fusion method, and depth/physiology guided attention modules for multimodal learning. Experimental results show that the proposed approach outperforms the benchmark method, with both the weighted fusion method and the guided attention modules contributing to the improved performance.
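The abstract does not specify how the standardised-normal-distribution weighting is computed. As a minimal NumPy sketch of one plausible reading (z-score a per-modality confidence statistic, then softmax it into fusion weights), where the function name, the mean-absolute-activation statistic, and the softmax step are all illustrative assumptions rather than the paper's actual method:

```python
import numpy as np

def standardized_weighted_fusion(features):
    """Hypothetical fusion: weight each modality's feature vector by a
    softmax over z-scored activation strengths, then sum.

    `features` is a list of equal-length 1D feature vectors, one per
    modality (e.g. ME and PS). This is a sketch, not the paper's code.
    """
    feats = [np.asarray(f, dtype=float) for f in features]
    # Per-modality "confidence": mean absolute activation (an assumption).
    strengths = np.array([np.abs(f).mean() for f in feats])
    # Standardise to zero mean / unit variance across modalities.
    z = (strengths - strengths.mean()) / (strengths.std() + 1e-8)
    # Softmax turns standardised scores into positive weights summing to 1.
    w = np.exp(z) / np.exp(z).sum()
    # Weighted sum fuses the modalities into a single representation.
    fused = sum(wi * f for wi, f in zip(w, feats))
    return fused, w

# Toy example: a weak "ME" vector and a stronger "PS" vector.
fused, weights = standardized_weighted_fusion([np.ones(4), 3 * np.ones(4)])
```

In this toy run the stronger modality receives the larger weight, so the fused vector leans toward it; the actual statistic and normalisation in the paper may differ.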
Original language: Undefined/Unknown
Publication status: Published - 23 Aug 2023

Keywords

  • cs.CV
  • cs.AI
