
Know when you don't know: A robust deep learning approach in the presence of unknown phenotypes

Deep convolutional neural networks show outstanding performance in image-based phenotype classification, provided that all existing phenotypes are presented during the training of the network. However, in real-world high-content screening (HCS) experiments, it is often impossible to know all phenotypes in advance; moreover, the discovery of novel phenotypes can itself be an HCS outcome of interest. This aspect of HCS is not yet covered by classical deep learning approaches: when presented with an image of a novel phenotype, a trained network does not flag the novelty but instead assigns the image to a wrong phenotype. To tackle this problem and address the need for novelty detection, we use a recently developed Bayesian approach for deep neural networks, Monte Carlo (MC) dropout, to define different uncertainty measures for each phenotype prediction. With real HCS data, we show that these uncertainty measures allow us to identify novel or unclear phenotypes. We also found that the MC dropout method yields a significant improvement in classification accuracy. The proposed procedure used in our HCS case study can easily be transferred to any existing network architecture and is beneficial in terms of both accuracy and novelty detection.
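The core idea of MC dropout described in the abstract can be sketched in a few lines: keep dropout active at prediction time, average the class probabilities over many stochastic forward passes, and use the spread of the averaged prediction (here, its predictive entropy) as an uncertainty measure. The toy two-class "network", the weights, and the entropy-based uncertainty below are illustrative assumptions, not the authors' implementation; the paper defines several uncertainty measures and applies the method to a full convolutional network.

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def forward(x, W, drop_p, rng):
    # One stochastic pass: a dropout mask on the input features
    # (scaled by 1/(1-p), as in inverted dropout), then a linear
    # layer followed by softmax.
    mask = [0.0 if rng.random() < drop_p else 1.0 / (1.0 - drop_p) for _ in x]
    h = [xi * mi for xi, mi in zip(x, mask)]
    logits = [sum(wi * hi for wi, hi in zip(row, h)) for row in W]
    return softmax(logits)

def mc_dropout_predict(x, W, T=100, drop_p=0.5, seed=0):
    """Average T stochastic forward passes (dropout kept ON at test
    time) and return the mean class probabilities together with the
    predictive entropy as one possible uncertainty measure."""
    rng = random.Random(seed)
    sums = [0.0] * len(W)
    for _ in range(T):
        p = forward(x, W, drop_p, rng)
        sums = [s + pi for s, pi in zip(sums, p)]
    mean_p = [s / T for s in sums]
    entropy = -sum(p * math.log(p) for p in mean_p if p > 0)
    return mean_p, entropy

# Toy 2-class weights: class 0 responds to feature 0, class 1 to feature 1.
W = [[2.0, -1.0], [-1.0, 2.0]]
# An input that clearly matches class 0 ...
p_known, h_known = mc_dropout_predict([3.0, 0.0], W)
# ... versus an ambiguous input lying between both classes,
# standing in for an unknown phenotype.
p_novel, h_novel = mc_dropout_predict([1.0, 1.0], W)
print(h_known < h_novel)  # the ambiguous input gets higher uncertainty
```

Thresholding such an uncertainty value is what lets a classifier "know when it doesn't know": predictions with high entropy can be routed to an expert instead of being forced into a known phenotype class.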

Author: Oliver Dürr, Elvis Murina, Daniel Siegismund, Vasily Tolkachev, Stephan Steigele, Beate Sick
Parent Title (English): ASSAY and Drug Development Technologies
Volume: 16
Document Type: Article
Year of Publication: 2018
Release Date: 2019/01/15
Tags: Screening; Classification; Deep learning; Imaging
Issue: 6
First Page: 343
Last Page: 349
Institutes: Institut für Optische Systeme - IOS
Open Access?: Yes
Relevance: Peer-reviewed publication in Master Journal List