Deep and interpretable regression models for ordinal outcomes

Outcomes with a natural order commonly occur in prediction problems, and the available input data are often a mixture of complex data, such as images, and tabular predictors. Deep learning (DL) models are state-of-the-art for image classification tasks but frequently treat ordinal outcomes as unordered and lack interpretability. In contrast, classical ordinal regression models respect the outcome's order and yield interpretable predictor effects but are limited to tabular data. We present ordinal neural network transformation models (ONTRAMs), which unite DL with classical ordinal regression approaches. ONTRAMs are a special case of transformation models and trade off flexibility and interpretability by additively decomposing the transformation function into terms for image and tabular data using jointly trained neural networks. The performance of the most flexible ONTRAM is by definition equivalent to that of a standard multi-class DL model trained with cross-entropy, while training is faster for ordinal outcomes. Lastly, we discuss how to interpret model components for both tabular and image data on two publicly available datasets.
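The additive decomposition described in the abstract can be illustrated with a minimal NumPy sketch. Here the scalar shifts `tabular_shift` and `image_shift` stand in for the outputs of the jointly trained neural networks, the logistic function serves as the link, and the monotone cut points are built from unconstrained parameters; all function and variable names are hypothetical, and the sign convention is one of several used in the transformation-model literature.

```python
import numpy as np

def sigmoid(z):
    """Logistic link function."""
    return 1.0 / (1.0 + np.exp(-z))

def ontram_probs(raw_theta, tabular_shift, image_shift):
    """Class probabilities of a simple ONTRAM-style ordinal model.

    The transformation function for class k is decomposed additively,
        h(y_k) = theta_k + beta_tab + eta_img,
    and cumulative probabilities follow via the logistic link:
        P(Y <= y_k) = sigmoid(h(y_k)).
    In the full model, beta_tab and eta_img would be produced by
    neural networks from tabular and image inputs, respectively.
    """
    # Enforce monotone cut points theta_1 < ... < theta_{K-1} by
    # accumulating exponentials of unconstrained raw parameters.
    theta = np.concatenate([raw_theta[:1],
                            raw_theta[:1] + np.cumsum(np.exp(raw_theta[1:]))])
    cdf = sigmoid(theta + tabular_shift + image_shift)  # P(Y <= y_k)
    cdf = np.concatenate([[0.0], cdf, [1.0]])           # pad with P = 0 and 1
    return np.diff(cdf)                                 # P(Y = y_k) by differencing

# Example: 4 ordered classes; the shifts mimic network outputs.
probs = ontram_probs(np.array([-1.0, 0.0, 0.5]),
                     tabular_shift=0.3, image_shift=-0.8)
```

Because the cut points are constructed to be increasing, the cumulative probabilities are monotone and differencing yields a valid probability vector over the ordered classes, which is exactly what makes the cross-entropy loss applicable.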



Author:Lucas Kook, Lisa Herzog, Torsten Hothorn, Oliver Dürr, Beate Sick
Parent Title (English):Pattern Recognition
Volume:Vol. 122
Document Type:Article
Year of Publication:2022
Release Date:2022/08/02
Tag:Deep learning; Interpretability; Distributional regression; Ordinal regression; Transformation models
Article Number:108263
Relevance:Peer-reviewed publication in Master Journal List
Open Access?:No
Licence (English):Elsevier licence terms