Abstract
Audio data is a highly rich form of information, often containing patterns with unique acoustic signatures. In pervasive sensing environments, the proliferation of capable smart devices has driven growing research interest in sound sensing to detect the ambient environment, recognise users' daily activities, and infer their health conditions. The main challenge, however, is that real-world environments often contain multiple sound sources, which can significantly compromise the robustness of such environment, event, and activity detection applications. In this paper, we explore different approaches to multi-sound classification and propose a stacked classifier based on recent advances in deep learning. We evaluate our proposed approach in a comprehensive set of experiments on both sound-effect and real-world datasets. The results demonstrate that our approach can robustly identify each sound category among mixed acoustic signals, without any a priori knowledge of the number and signatures of sounds in the mixed signals.
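The abstract frames the task as multi-label classification: each sound category in a mixture is detected independently, so the number of concurrent sounds never has to be known in advance. The sketch below illustrates that general idea with sigmoid outputs and per-class thresholding; it is not the stacked classifier the paper proposes, and the input shape, layer sizes, class count, and 0.5 threshold are all placeholder assumptions.

```python
# Illustrative multi-label sound classifier: independent sigmoid outputs
# let any subset of categories be active at once, so the model needs no
# a priori knowledge of how many sounds are mixed in a clip.
import torch
import torch.nn as nn

N_MELS = 64      # mel-spectrogram bins (assumed input feature)
N_FRAMES = 128   # time frames per clip (assumed)
N_CLASSES = 10   # number of sound categories (placeholder)

class MultiLabelSoundNet(nn.Module):
    def __init__(self, n_classes: int = N_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size embedding
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, N_MELS, N_FRAMES) log-mel spectrogram
        z = self.features(x).flatten(1)
        return self.head(z)  # raw logits, one per category

model = MultiLabelSoundNet()
criterion = nn.BCEWithLogitsLoss()  # per-class binary decisions

# One training step on dummy data: targets are multi-hot vectors, so a
# clip mixing, say, "speech" and "siren" simply has two 1s.
x = torch.randn(8, 1, N_MELS, N_FRAMES)
y = (torch.rand(8, N_CLASSES) > 0.7).float()
loss = criterion(model(x), y)
loss.backward()

# Inference: threshold each sigmoid independently; any number of
# categories (including zero) can be detected in one clip.
with torch.no_grad():
    probs = torch.sigmoid(model(x))
    detected = probs > 0.5  # 0.5 is an assumed operating threshold
```

The key design choice here is `BCEWithLogitsLoss` over a softmax: softmax would force the per-class scores to compete and sum to one, which bakes in a single-source assumption, whereas independent binary outputs match the mixed-signal setting the abstract describes.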
Original language | English |
---|---|
Title of host publication | 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom) |
Publisher | IEEE Computer Society |
Pages | 1-7 |
Number of pages | 7 |
ISBN (Electronic) | 9781538691489 |
DOIs | |
Publication status | Published - 22 Jul 2019 |
Event | IEEE International Conference on Pervasive Computing and Communications (PerCom 2019), Kyoto, Japan. Duration: 12 Mar 2019 → 14 Mar 2019. Conference number: 17. http://www.percom.org/Previous/ST2019/home.html |
Publication series
Name | Pervasive Computing and Communications (PerCom) |
---|---|
Publisher | IEEE |
ISSN (Print) | 2474-2503 |
ISSN (Electronic) | 2474-249X |
Conference
Conference | IEEE International Conference on Pervasive Computing and Communications (PerCom 2019) |
---|---|
Abbreviated title | PerCom 2019 |
Country/Territory | Japan |
City | Kyoto |
Period | 12/03/19 → 14/03/19 |
Internet address | http://www.percom.org/Previous/ST2019/home.html |