TY - JOUR
U1 - Journal article, peer-reviewed
A1 - Kook, Lucas
A1 - Herzog, Lisa
A1 - Hothorn, Torsten
A1 - Dürr, Oliver
A1 - Sick, Beate
T1 - Deep and interpretable regression models for ordinal outcomes
JF - Pattern Recognition
N2 - Outcomes with a natural order commonly occur in prediction problems and often the available input data are a mixture of complex data like images and tabular predictors. Deep Learning (DL) models are state-of-the-art for image classification tasks but frequently treat ordinal outcomes as unordered and lack interpretability. In contrast, classical ordinal regression models consider the outcome’s order and yield interpretable predictor effects but are limited to tabular data. We present ordinal neural network transformation models (ontrams), which unite DL with classical ordinal regression approaches. ontrams are a special case of transformation models and trade off flexibility and interpretability by additively decomposing the transformation function into terms for image and tabular data using jointly trained neural networks. The performance of the most flexible ontram is by definition equivalent to a standard multi-class DL model trained with cross-entropy while being faster in training when facing ordinal outcomes. Lastly, we discuss how to interpret model components for both tabular and image data on two publicly available datasets.
KW - Deep learning
KW - Interpretability
KW - Distributional regression
KW - Ordinal regression
KW - Transformation models
Y1 - 2022
SN - 1873-5142
DO - https://doi.org/10.1016/j.patcog.2021.108263
N1 - Corresponding Author: Beate Sick
VL - 122
PB - Elsevier
ER -