Semi-supervised musical instrument recognition


Diment, Aleksandr

Abstract

Music information retrieval has gained popularity over the last decades, and musical instrument recognition is one specific research topic in the field. In this thesis, semi-supervised learning techniques are explored in the context of musical instrument recognition. Conventional approaches to musical instrument recognition rely on annotated data, i.e., example recordings of the target instruments paired with their target labels, to perform training. Manually annotating the collected training data is highly laborious and tedious. Semi-supervised methods make it possible to incorporate additional unannotated data into training. Such data consists merely of recordings of the instruments and is therefore significantly easier to acquire. Hence, these methods keep the overall development cost at the same level while notably improving system performance. The implemented musical instrument recognition system utilises a mixture-model semi-supervised learning scheme in the form of two EM-based algorithms. Furthermore, two extensions are proposed: weighting of the labelled data and class-wise retraining, together with convergence criteria tailored to the particular classification scenario. The evaluation is performed on sets of four and ten instruments and yields overall average recognition accuracies of 95.3% and 68.4%, respectively, corresponding to absolute gains of 6.1 and 9.7 percentage points over the initial, purely supervised cases. Additional experiments investigate the effects of the proposed modifications as well as the optimal relative size of the labelled dataset.
In general, the obtained performance improvement is noteworthy, and future work could investigate the behaviour of the implemented algorithms along with the proposed and further extended approaches.
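To make the mixture-model scheme concrete, the following is a minimal illustrative sketch of semi-supervised EM training, not the thesis's actual implementation: each class is modelled as a one-dimensional Gaussian, parameters are initialised from the labelled data, and the EM loop then re-estimates them from the labelled samples (with a weight `w` as a hypothetical analogue of the proposed labelled-data weighting) plus soft responsibilities computed for the unlabelled samples. The function names and the fixed iteration count are assumptions for illustration only.

```python
import math

def semi_supervised_em(labelled, unlabelled, n_iter=20, w=5.0):
    """Toy 1-D semi-supervised EM for two Gaussian classes.

    labelled:   list of (x, y) pairs with y in {0, 1}
    unlabelled: list of x values without labels
    w:          weight applied to labelled samples (illustrative
                analogue of labelled-data weighting)
    """
    # Initialise class means, variances, and priors from labelled data only.
    mu, var = [0.0, 0.0], [1.0, 1.0]
    for c in (0, 1):
        xs = [x for x, y in labelled if y == c]
        mu[c] = sum(xs) / len(xs)
        var[c] = max(sum((x - mu[c]) ** 2 for x in xs) / len(xs), 1e-3)
    prior = [0.5, 0.5]

    def pdf(x, c):
        return math.exp(-(x - mu[c]) ** 2 / (2 * var[c])) \
               / math.sqrt(2 * math.pi * var[c])

    for _ in range(n_iter):
        # E-step: soft class responsibilities for the unlabelled points.
        resp = []
        for x in unlabelled:
            p = [prior[c] * pdf(x, c) for c in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: weighted labelled data (responsibility fixed to 1)
        # plus soft counts from the unlabelled data.
        for c in (0, 1):
            num = w * sum(x for x, y in labelled if y == c) \
                  + sum(r[c] * x for r, x in zip(resp, unlabelled))
            den = w * sum(1 for _, y in labelled if y == c) \
                  + sum(r[c] for r in resp)
            mu[c] = num / den
            v = w * sum((x - mu[c]) ** 2 for x, y in labelled if y == c) \
                + sum(r[c] * (x - mu[c]) ** 2 for r, x in zip(resp, unlabelled))
            var[c] = max(v / den, 1e-3)
        total = w * len(labelled) + len(unlabelled)
        prior = [(w * sum(1 for _, y in labelled if y == c)
                  + sum(r[c] for r in resp)) / total for c in (0, 1)]

    # Maximum-a-posteriori classifier from the fitted mixture.
    def classify(x):
        return 0 if prior[0] * pdf(x, 0) >= prior[1] * pdf(x, 1) else 1
    return classify

# Usage: two labelled examples per class, six unlabelled points.
clf = semi_supervised_em(
    labelled=[(0.0, 0), (0.5, 0), (4.0, 1), (4.5, 1)],
    unlabelled=[0.2, 0.4, 0.1, 4.2, 4.3, 4.1],
)
```

In a real instrument recogniser the scalar `x` would be replaced by a feature vector (e.g. MFCCs) and each class by a multivariate mixture, but the labelled/unlabelled split in the M-step follows the same pattern.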

Keywords

music information retrieval; musical instrument recognition; pattern recognition; semi-supervised learning

Year: 2013
University: Tampere University of Technology
Address: Finland