Modelling Non-stationary Noise with Spectral Factorisation in Automatic Speech Recognition
Abstract
Speech recognition systems intended for everyday use must be able to cope with a large variety of noise types and levels, including highly non-stationary multi-source mixtures. This study applies spectral factorisation algorithms and long temporal context for separating speech and noise from mixed signals. To adapt the system to varying environments, noise models are acquired from the context, or learnt from the mixture itself without prior information. We also propose methods for reducing the size of the bases used for speech and noise modelling by 20-40 times for better practical applicability. We evaluate the performance of the methods both as a standalone classifier and as a signal-enhancing front-end for external recognisers. For the CHiME noisy speech corpus containing non-stationary multi-source household noises at signal-to-noise ratios ranging from +9 to -6 dB, we report average keyword recognition rates up to 87.8% using a single-stream sparse classification algorithm.
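The core separation step the abstract refers to — factorising a noisy magnitude spectrogram over non-negative speech and noise bases and reconstructing the speech by a soft mask — can be illustrated with a minimal supervised NMF sketch. All sizes, the random stand-in bases, and the update count below are illustrative assumptions, not the paper's actual configuration (which uses exemplar bases with long temporal context):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy magnitude spectrogram of a speech+noise mixture (freq bins x frames).
# In practice this would come from the STFT of a noisy recording.
F, T = 64, 100
V = rng.random((F, T)) + 1e-6

# Hypothetical pre-trained bases: columns are spectral atoms for speech
# and for noise (random stand-ins here; real bases are learnt from data).
K_speech, K_noise = 20, 10
W_speech = rng.random((F, K_speech)) + 1e-6
W_noise = rng.random((F, K_noise)) + 1e-6
W = np.hstack([W_speech, W_noise])  # bases stay fixed during separation

# Non-negative activations, estimated with multiplicative updates that
# decrease the generalised KL divergence D(V || W H).
H = rng.random((W.shape[1], T)) + 1e-6
for _ in range(100):
    WH = W @ H + 1e-9
    H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + 1e-9)

# Wiener-style soft mask: speech part of the model over the full model.
S_speech = W_speech @ H[:K_speech] + 1e-9
S_noise = W_noise @ H[K_speech:] + 1e-9
speech_estimate = V * S_speech / (S_speech + S_noise)
```

The mask lies in (0, 1), so the speech estimate is bounded by the mixture magnitude; an enhancement front-end would invert `speech_estimate` back to a waveform with the mixture phase, while a sparse-classification back-end would instead use the activations `H` directly.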
Keywords: automatic speech recognition; noise robustness; non-stationary noise; non-negative spectral factorisation; exemplar-based
- Year: 2013
- Journal: Computer Speech & Language
- Volume: 27
- Number: 3
- Pages: 763-779
- Month: May