Capturing suprasegmental features of a voice with RNNs for improved speaker clustering
Deep neural networks have become a veritable alternative to classic speaker recognition and clustering methods in recent years. However, while the speech signal clearly is a time series, and despite the body of literature on the benefits of prosodic (suprasegmental) features, identifying voices has usually not been approached with sequence learning methods. Only recently has a recurrent neural network (RNN) been successfully applied to this task, while the use of convolutional neural networks (CNNs), which, unlike RNNs, cannot capture arbitrary time dependencies, still prevails. In this paper, we show the effectiveness of RNNs for speaker recognition by improving state-of-the-art speaker clustering performance and robustness on the classic TIMIT benchmark. We provide arguments why RNNs are superior by experimentally showing a “sweet spot” of the segment length for successfully capturing prosodic information that has been theoretically predicted in previous work.
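The paper's pipeline is not reproduced here, but the core idea of sequence learning for voice identification, letting a recurrent network consume a variable-length sequence of spectral frames and emit a fixed-size voice representation, can be sketched in a few lines. All names, weights, and dimensions below are illustrative assumptions, not the authors' actual configuration:

```python
import numpy as np

def rnn_embed(frames, W_xh, W_hh, b_h):
    """Run a simple Elman RNN over a (T, d) sequence of feature frames
    (e.g. MFCCs) and return the final hidden state as a fixed-size
    embedding of the whole segment."""
    h = np.zeros(W_hh.shape[0])
    for x in frames:
        # Each step mixes the current frame with the running summary,
        # which is what lets the network accumulate suprasegmental cues.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h

rng = np.random.default_rng(0)
d, hidden = 13, 32  # e.g. 13 MFCCs per frame, 32-dim embedding (illustrative)
W_xh = rng.normal(scale=0.1, size=(hidden, d))
W_hh = rng.normal(scale=0.1, size=(hidden, hidden))
b_h = np.zeros(hidden)

utterance = rng.normal(size=(200, d))  # 200 random frames standing in for a segment
emb = rnn_embed(utterance, W_xh, W_hh, b_h)
print(emb.shape)  # (32,)
```

In a trained system the embeddings of many segments would then be compared and grouped by a clustering algorithm; the segment length T is exactly the knob whose "sweet spot" the paper investigates.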
Author: Thilo Stadelmann, Sebastian Glinski-Haefeli, Patrick Gerber, Oliver Dürr
URN: urn:nbn:de:bsz:kon4-opus4-22870
DOI: https://doi.org/10.1007/978-3-319-99978-4_26
ISBN: 978-3-319-99978-4
ISBN: 978-3-319-99977-7
Parent Title (English): 8th IAPR TC3 Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR), 19-21 September 2018, Siena, Italy
Publisher: Springer
Place of publication: Cham
Document Type: Conference Proceeding
Language: English
Year of Publication: 2018
Release Date: 2020/01/21
Tag: Speaker clustering; Speaker recognition; Recurrent neural network
Edition: Accepted version
First Page: 333
Last Page: 345
Institutes: Institut für Optische Systeme - IOS
DDC functional group: 006 Special computer methods
Open Access?: Yes