TY - JOUR
T1 - Learning stage-wise GANs for whistle extraction in time-frequency spectrograms
AU - Li, Pu
AU - Roch, Marie A.
AU - Klinck, Holger
AU - Fleishman, Erica
AU - Gillespie, Douglas
AU - Nosal, Eva-Marie
AU - Shiu, Yu
AU - Liu, Xiaobai
N1 - Funding: The authors thank Michael Weise of the US Office of Naval Research for support under grants N000141712867 and N000142112567.
PY - 2023/3/31
Y1 - 2023/3/31
N2 - Whistle contour extraction aims to derive animal whistles from time-frequency spectrograms as polylines. For toothed whales, whistle extraction results can serve as the basis for analyzing animal abundance, species identity, and social activities. During the last few decades, as long-term recording systems have become affordable, automated whistle extraction algorithms have been proposed to process large volumes of recording data. Recently, a deep learning-based method demonstrated superior performance in extracting whistles under varying noise conditions. However, training such networks requires a large amount of labor-intensive annotation, which is not available for many species. To overcome this limitation, we present a framework of stage-wise generative adversarial networks (GANs), which compiles new whistle data suitable for deep model training via three stages: generation of background noise in the spectrogram, generation of whistle contours, and generation of whistle signals. By separating the generation of different components in the samples, our framework composes visually promising whistle data and labels even when few expert-annotated data are available. Regardless of the amount of human-annotated data, the proposed data augmentation framework leads to a consistent improvement in the performance of the whistle extraction model, with a maximum increase of 1.69 in the whistle extraction mean F1-score. Our stage-wise GAN also surpasses a single GAN in improving whistle extraction models with augmented data. The data and code will be available at https://github.com/Paul-LiPu/CompositeGAN_WhistleAugment.
AB - Whistle contour extraction aims to derive animal whistles from time-frequency spectrograms as polylines. For toothed whales, whistle extraction results can serve as the basis for analyzing animal abundance, species identity, and social activities. During the last few decades, as long-term recording systems have become affordable, automated whistle extraction algorithms have been proposed to process large volumes of recording data. Recently, a deep learning-based method demonstrated superior performance in extracting whistles under varying noise conditions. However, training such networks requires a large amount of labor-intensive annotation, which is not available for many species. To overcome this limitation, we present a framework of stage-wise generative adversarial networks (GANs), which compiles new whistle data suitable for deep model training via three stages: generation of background noise in the spectrogram, generation of whistle contours, and generation of whistle signals. By separating the generation of different components in the samples, our framework composes visually promising whistle data and labels even when few expert-annotated data are available. Regardless of the amount of human-annotated data, the proposed data augmentation framework leads to a consistent improvement in the performance of the whistle extraction model, with a maximum increase of 1.69 in the whistle extraction mean F1-score. Our stage-wise GAN also surpasses a single GAN in improving whistle extraction models with augmented data. The data and code will be available at https://github.com/Paul-LiPu/CompositeGAN_WhistleAugment.
KW - Electrical and electronic engineering
KW - Computer science applications
KW - Media technology
KW - Signal processing
U2 - 10.1109/tmm.2023.3251109
DO - 10.1109/tmm.2023.3251109
M3 - Article
SN - 1520-9210
VL - 25
SP - 9302
EP - 9314
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
ER -