Sound Event Detection with Depthwise Separable and Dilated Convolutions

Drossos, Konstantinos; Mimilakis, Stylianos Ioannis; Gharib, Shayan; Li, Yanxiong; Virtanen, Tuomas

State-of-the-art sound event detection (SED) methods usually employ a series of convolutional neural networks (CNNs) to extract useful features from the input audio signal, followed by recurrent neural networks (RNNs) to model longer temporal context in the extracted features. The number of channels of the CNNs and the size of the weight matrices of the RNNs have a direct effect on the total number of parameters of the SED method, which typically amounts to a couple of millions. Additionally, the usually long input sequences of an SED method, combined with the use of an RNN, introduce complications such as increased training time, difficult gradient flow, and impeded parallelization of the SED method. To tackle these problems, we propose replacing the CNNs with depthwise separable convolutions and the RNNs with dilated convolutions. We compare the proposed method to a baseline convolutional neural network on an SED task, and achieve a reduction of the number of parameters by 85% and of the average training time per epoch by 78%, together with an increase of the average frame-wise F1 score by 4.6% and a reduction of the average error rate by 3.8%.
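The parameter savings of the two proposed replacements can be sketched with simple counting arguments. A standard 2D convolution needs one k × k kernel per (input, output) channel pair, while a depthwise separable convolution factors this into a per-channel depthwise step plus a 1 × 1 pointwise step; stacked dilated convolutions grow the receptive field without recurrence. The minimal sketch below uses illustrative layer sizes and dilation rates that are assumptions, not values taken from the paper:

```python
def conv2d_params(c_in, c_out, k):
    # standard 2D convolution: one k x k kernel per (input, output) channel pair
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # depthwise step: one k x k kernel per input channel (groups = c_in)
    depthwise = c_in * k * k
    # pointwise step: a 1 x 1 convolution that mixes channels
    pointwise = c_in * c_out
    return depthwise + pointwise

def dilated_receptive_field(k, dilations):
    # receptive field of a stack of dilated convolutions along one axis
    return 1 + sum((k - 1) * d for d in dilations)

# Illustrative sizes (assumed, not from the paper):
c_in, c_out, k = 128, 256, 3
standard = conv2d_params(c_in, c_out, k)            # 294912 parameters
separable = depthwise_separable_params(c_in, c_out, k)  # 33920 parameters
reduction = 1 - separable / standard                # ~88.5% fewer parameters

# Exponentially increasing dilation rates cover long temporal context cheaply:
rf = dilated_receptive_field(3, [1, 2, 4, 8])       # 31 time frames

print(standard, separable, round(reduction, 3), rf)
```

This kind of factorization is why the replacement yields a large parameter reduction while keeping, per stacked dilated layer, an exponentially growing temporal context that an RNN would otherwise provide.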


Keywords: sound event detection

Book title:
IEEE World Congress on Computational Intelligence (WCCI) 2020