Automated call detection for acoustic surveys with structured calls of varying length

Yuheng Wang*, Juan Ye, David L. Borchers

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

1. When recorders are used to survey acoustically conspicuous species, identification of the target species' calls in the recordings is essential for estimating density and abundance. We investigate how well deep neural networks identify vocalisations consisting of phrases of varying lengths, each containing a variable number of syllables. We use recordings of Hainan gibbon (Nomascus hainanus) vocalisations to develop and test the methods.

2. We propose two methods for exploiting the two-level structure of such data. The first combines convolutional neural network (CNN) models with a hidden Markov model (HMM); the second uses a convolutional recurrent neural network (CRNN). Both models learn acoustic features of syllables via a CNN, and learn the temporal grouping of syllables into phrases via either an HMM or a recurrent network. We compare their performance to the commonly used CNNs LeNet and VGGNet, and to a support vector machine (SVM). We also propose a dynamic programming method to evaluate how well phrases are predicted. This is useful for evaluating performance when vocalisations are labelled by phrases, not syllables.
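The HMM step above amounts to dynamic-programming (Viterbi) decoding over per-frame classifier scores. As an illustrative sketch only (not the authors' implementation), a minimal Viterbi smoother over a hypothetical two-state labelling (background vs. gibbon syllable) might look like this, where the transition matrix, prior, and frame scores are all invented for the example:

```python
import math

def viterbi(emission_logprobs, log_trans, log_init):
    """Most likely state sequence given per-frame log-likelihoods (dynamic programming)."""
    n_states = len(log_init)
    # Initialise with the prior plus the first frame's evidence.
    delta = [log_init[s] + emission_logprobs[0][s] for s in range(n_states)]
    backptr = []
    for frame in emission_logprobs[1:]:
        ptr = [max(range(n_states), key=lambda p: delta[p] + log_trans[p][s])
               for s in range(n_states)]
        delta = [delta[ptr[s]] + log_trans[ptr[s]][s] + frame[s]
                 for s in range(n_states)]
        backptr.append(ptr)
    # Backtrack from the best final state.
    state = max(range(n_states), key=lambda s: delta[s])
    path = [state]
    for ptr in reversed(backptr):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# States: 0 = background, 1 = gibbon syllable. "Sticky" transitions penalise
# rapid switching, which smooths isolated spurious detections from a
# (hypothetical) CNN frame classifier.
stay, switch = math.log(0.9), math.log(0.1)
log_trans = [[stay, switch], [switch, stay]]
log_init = [math.log(0.9), math.log(0.1)]  # background more likely a priori

# One isolated false positive at frame 1 gets smoothed away ...
spurious = [(0.9, 0.1), (0.2, 0.8), (0.9, 0.1), (0.9, 0.1)]
em = [[math.log(p) for p in f] for f in spurious]
print(viterbi(em, log_trans, log_init))  # [0, 0, 0, 0]

# ... while a sustained run of high syllable scores survives decoding.
call = [(0.9, 0.1), (0.2, 0.8), (0.1, 0.9), (0.2, 0.8), (0.95, 0.05)]
em = [[math.log(p) for p in f] for f in call]
print(viterbi(em, log_trans, log_init))  # [0, 1, 1, 1, 0]
```

This smoothing behaviour is what drives the large reduction in phrase-count error rates reported below: lone high-scoring frames no longer split or spawn phrases.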

3. Our methods perform substantially better than the commonly used methods when applied to the gibbon acoustic recordings. The CRNN has an F-score of 90% on phrase prediction, which is 18% higher than the best of the SVM, LeNet and VGGNet methods. HMM post-processing raises the F-score of these last three methods to as much as 87%. The number of phrases is overestimated by the CNNs and the SVM, leading to error rates between 49% and 54%; HMM post-processing reduces these error rates to as low as 0.4%. Similarly, the error rate of the CRNN's prediction is no more than 0.5%.

4. CRNNs are better at identifying phrases of varying lengths composed of a varying number of syllables than simpler CNN or SVM models. We find a CRNN model to be best at this task, with a CNN combined with an HMM performing almost as well. We recommend that these kinds of models be used for species whose vocalisations are structured into phrases of varying lengths.
Original language: English
Pages (from-to): 1552-1567
Number of pages: 16
Journal: Methods in Ecology and Evolution
Volume: 13
Issue number: 7
Early online date: 5 May 2022
DOIs
Publication status: Published - 1 Jul 2022

Keywords

  • Acoustic survey
  • Automated call detection
  • Convolutional recurrent neural network
  • Gibbon calls
  • Hidden Markov model
  • Machine learning
