Prior work on Alzheimer's Disease (AD) has demonstrated that convolutional neural networks (CNNs) can leverage high-dimensional image information for diagnosing patients. Besides such data-driven approaches, many established biomarkers exist and are typically represented as tabular data, such as demographics, genetic alterations, or laboratory measurements from cerebrospinal fluid. However, little research has focused on the effective integration of tabular data into existing CNN architectures to improve patient diagnosis. We introduce the Dynamic Affine Feature Map Transform (DAFT), a general-purpose module for CNNs that incites or represses high-level concepts learned from a 3D image by conditioning the feature maps of a convolutional layer on both a patient's image and tabular clinical information. This is achieved by an auxiliary neural network that outputs a scaling factor and offset used to dynamically apply an affine transformation to the feature maps of a convolutional layer. In our experiments on AD diagnosis and time-to-dementia prediction, we show that the DAFT is highly effective in combining 3D image and tabular information, achieving a mean balanced accuracy of 0.622 for diagnosis and a mean c-index of 0.748 for time-to-dementia prediction, thus outperforming all baseline methods. Finally, our extensive ablation study and empirical experiments reveal that the performance improvement due to the DAFT is robust with respect to many design choices.
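To make the conditioning mechanism concrete, the following is a minimal PyTorch-style sketch of an affine feature map transform driven by an auxiliary network. The class name `DAFT`, the bottleneck width, and the placement of the module within the backbone are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class DAFT(nn.Module):
    """Sketch of a Dynamic Affine Feature Map Transform block (assumed layout).

    An auxiliary MLP predicts a per-channel scale and offset from the
    concatenation of globally pooled 3D image features and the tabular data,
    then applies them as an affine transform to the convolutional feature maps.
    """

    def __init__(self, n_channels: int, n_tabular: int, bottleneck: int = 7):
        super().__init__()
        # Squeeze the spatial dimensions of the 3D feature map to one vector per channel.
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.aux = nn.Sequential(
            nn.Linear(n_channels + n_tabular, bottleneck),
            nn.ReLU(inplace=True),
            nn.Linear(bottleneck, 2 * n_channels),  # per-channel scale and offset
        )

    def forward(self, feature_map: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        # feature_map: (B, C, D, H, W); tabular: (B, n_tabular)
        squeezed = self.pool(feature_map).flatten(1)  # (B, C)
        scale, offset = self.aux(torch.cat([squeezed, tabular], dim=1)).chunk(2, dim=1)
        scale = scale.view(*scale.shape, 1, 1, 1)     # broadcast over D, H, W
        offset = offset.view(*offset.shape, 1, 1, 1)
        return scale * feature_map + offset
```

Because scale and offset are computed from both modalities, the tabular data can amplify or suppress individual image-derived feature channels on a per-patient basis, which is the behavior the abstract describes.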