Recurrent Neural Networks for Polyphonic Sound Event Detection in Real Life Recordings
Abstract
In this paper we present an approach to polyphonic sound event detection in real-life recordings based on bi-directional long short-term memory (BLSTM) recurrent neural networks (RNNs). A single multilabel BLSTM RNN is trained to map acoustic features of a mixture signal consisting of sounds from multiple classes to binary activity indicators of each event class. Our method is tested on a large database of real-life recordings, with 61 classes (e.g. music, car, speech) from 10 different everyday contexts. The proposed method outperforms previous approaches by a large margin, and the results are further improved using data augmentation techniques. Overall, our system reports an average F1-score of 65.5% on 1-second blocks and 64.7% on single frames, a relative improvement of 6.8% and 15.1% respectively over the previous state-of-the-art approach.
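As a rough illustration of the architecture the abstract describes, the following PyTorch sketch maps a sequence of acoustic feature frames to per-frame, per-class activity probabilities with a multilabel BLSTM. All layer sizes, the feature dimensionality, and the 0.5 decision threshold are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultilabelBLSTM(nn.Module):
    """Sketch of a multilabel BLSTM for polyphonic sound event detection.

    Maps a sequence of acoustic feature frames (e.g. log mel-band energies)
    to per-frame activity probabilities, one per event class. Hyperparameters
    are assumptions for illustration only.
    """
    def __init__(self, n_features=40, hidden_size=256, n_layers=2, n_classes=61):
        super().__init__()
        self.blstm = nn.LSTM(
            input_size=n_features,
            hidden_size=hidden_size,
            num_layers=n_layers,
            batch_first=True,
            bidirectional=True,
        )
        # One sigmoid output per class: classes are not mutually exclusive,
        # so several events can be active in the same frame (polyphony).
        self.out = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, frames, n_features)
        h, _ = self.blstm(x)           # (batch, frames, 2 * hidden_size)
        return torch.sigmoid(self.out(h))  # (batch, frames, n_classes)

model = MultilabelBLSTM()
features = torch.randn(8, 100, 40)      # a batch of 100-frame excerpts
activity = model(features)              # per-frame class probabilities
binary = (activity > 0.5).float()       # threshold to binary activity indicators
loss_fn = nn.BCELoss()                  # multilabel binary cross-entropy
```

Because each class gets an independent sigmoid output trained with binary cross-entropy, the network can indicate several simultaneously active sound events per frame, which is what distinguishes this multilabel setup from a single-label softmax classifier.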
- Year: 2016
- Book title: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- Pages: 6440–6444
- Month: March
- ISBN: 978-1-4799-9988-0
- DOI: 10.1109/ICASSP.2016.7472917