
Capturing suprasegmental features of a voice with RNNs for improved speaker clustering

  • Deep neural networks have become a viable alternative to classic speaker recognition and clustering methods in recent years. However, while the speech signal clearly is a time series, and despite the body of literature on the benefits of prosodic (suprasegmental) features, identifying voices has usually not been approached with sequence learning methods. Only recently has a recurrent neural network (RNN) been successfully applied to this task, while convolutional neural networks (CNNs), which unlike RNNs cannot capture arbitrary time dependencies, still prevail. In this paper, we show the effectiveness of RNNs for speaker recognition by improving state-of-the-art speaker clustering performance and robustness on the classic TIMIT benchmark. We provide arguments for why RNNs are superior by experimentally showing a "sweet spot" of the segment length for successfully capturing prosodic information, which had been theoretically predicted in previous work.
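The abstract's core idea, using a recurrence to summarize a variable-length speech segment into one fixed-size voice embedding that is then clustered, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the tanh RNN cell, the random weights, and all dimensions (13 MFCC-like features per frame, 16 hidden units) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_feat, d_hid = 13, 16                     # e.g. 13 MFCCs per frame (assumed)
Wx = rng.normal(0, 0.3, (d_hid, d_feat))   # input-to-hidden weights
Wh = rng.normal(0, 0.3, (d_hid, d_hid))    # recurrent hidden-to-hidden weights
b = np.zeros(d_hid)

def rnn_embed(frames):
    """Summarize a (T, d_feat) segment into one unit-length voice embedding."""
    h = np.zeros(d_hid)
    for x in frames:                       # the recurrence lets arbitrarily
        h = np.tanh(Wx @ x + Wh @ h + b)   # distant frames influence h
    return h / np.linalg.norm(h)           # unit norm -> cosine similarity

seg = rng.normal(size=(200, d_feat))       # ~2 s of frames at 10 ms hop
emb = rnn_embed(seg)
print(emb.shape)                           # (16,)
```

Because every embedding has the same size regardless of segment length, segments can afterwards be compared by cosine similarity and grouped with any standard clustering method; the paper's point is that the recurrent summary, unlike a fixed-receptive-field CNN, can carry prosodic cues across the whole segment.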

Author:Thilo Stadelmann, Sebastian Glinski-Haefeli, Patrick Gerber, Oliver Dürr
Parent Title (English):8th IAPR TC3 Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR), 19-21 September 2018, Siena, Italy
Place of publication:Cham
Document Type:Conference Proceeding
Year of Publication:2018
Release Date:2020/01/21
Tag:Speaker clustering; Speaker recognition; Recurrent neural network
Edition:Accepted version
First Page:333
Last Page:345
Institutes:Institut für Optische Systeme - IOS
DDC functional group:006 Special computer methods
Open Access?:Yes
Licence (German):Protected by copyright (Urheberrechtlich geschützt)