Accepted Papers


Alzheimer’s Disease Modelling and Staging through Independent Gaussian Process Analysis of Spatio-Temporal Brain Changes

Clement Abi Nader, Nicholas Ayache, Philippe Robert, and Marco Lorenzi

Alzheimer’s disease (AD) is characterized by complex and largely unknown progression dynamics affecting the brain’s morphology. Although the disease evolution spans decades, to date we cannot rely on long-term data to model the pathological progression, since most of the available measures are on a short-term scale. It is therefore difficult to understand and quantify the temporal progression patterns affecting brain regions across the AD evolution. In this work, we present a generative model based on probabilistic matrix factorization across temporal and spatial sources. The proposed method addresses the problem of disease progression modeling by introducing clinically inspired statistical priors. To promote smoothness in time and model plausible pathological evolutions, the temporal sources are defined as monotonic and independent Gaussian processes. We also estimate an individual time-shift parameter for each patient to automatically position them along the sources’ time axis. To encode the spatial continuity of the brain substructures, the spatial sources are modeled as Gaussian random fields. We test our algorithm on grey matter maps extracted from brain structural images. The experiments highlight differential temporal progression patterns mapping brain regions key to the AD pathology and reveal a disease-specific timescale associated with the decline of volumetric biomarkers across clinical stages.
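To make the structure of such a factorization concrete, here is a minimal sketch in our own, illustrative notation (the paper defines its exact likelihood and priors; the symbols below are assumptions). The grey matter signal $y_i(x, t)$ of subject $i$ at brain location $x$ and visit time $t$ is decomposed as

$$ y_i(x, t) \;\approx\; \sum_{n=1}^{N} S_n(x)\, U_n(t + \delta_i) \;+\; \varepsilon_i(x, t), $$

where each temporal source $U_n$ is a monotonic Gaussian process, each spatial source $S_n$ is a Gaussian random field, $\delta_i$ is the individual time shift that positions subject $i$ on the common disease time axis, and $\varepsilon_i$ is observation noise.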


Multi-Channel Stochastic Variational Inference for the Joint Analysis of Heterogeneous Biomedical Data in Alzheimer’s Disease

Luigi Antelmi, Nicholas Ayache, Philippe Robert, and Marco Lorenzi

The joint analysis of biomedical data in Alzheimer’s Disease (AD) is important for better clinical diagnosis and for understanding the relationship between biomarkers. However, jointly accounting for heterogeneous measures poses important challenges related to the modeling of heterogeneity and to the interpretability of the results. We address these issues by proposing a novel multi-channel stochastic generative model. We assume that a latent variable generates the data observed through different channels (e.g., clinical scores, imaging), and we describe an efficient way to jointly estimate the distribution of the latent variable and the data generative process. Experiments on synthetic data show that the multi-channel formulation allows superior data reconstruction compared with the single-channel one. Moreover, the derived lower bound of the model evidence represents a promising model selection criterion. Experiments on AD data show that the model parameters can be used for unsupervised patient stratification and for the joint interpretation of the heterogeneous observations. Because of its general and flexible formulation, we believe that the proposed method can find various applications as a general data fusion technique.
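As a rough, hedged illustration of the multi-channel idea (our notation; the paper’s variational family and bound may differ in detail), a single latent variable $z$ is assumed to generate all $C$ channels, and each channel contributes its own approximate posterior:

$$ p(x_1, \dots, x_C) = \int p(z) \prod_{c=1}^{C} p_{\theta_c}(x_c \mid z)\, dz, \qquad \mathcal{L} = \frac{1}{C}\sum_{c=1}^{C} \left( \mathbb{E}_{q_{\phi_c}(z \mid x_c)}\!\left[ \sum_{c'=1}^{C} \log p_{\theta_{c'}}(x_{c'} \mid z) \right] - \mathrm{KL}\!\left( q_{\phi_c}(z \mid x_c)\,\middle\|\,p(z) \right) \right), $$

so that the encoder of every channel is trained to reconstruct all channels; a lower bound of this kind is what can serve as a model selection criterion, as described above.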


Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease

Johannes Rieke, Fabian Eitel, Martin Weygandt, John-Dylan Haynes, and Kerstin Ritter

Visualizing and interpreting convolutional neural networks (CNNs) is an important task to increase trust in automatic medical decision-making systems. In this study, we train a 3D CNN to detect Alzheimer’s disease based on structural MRI scans of the brain. Then, we apply four different gradient-based and occlusion-based visualization methods that explain the network’s classification decisions by highlighting relevant areas in the input image. We compare the methods qualitatively and quantitatively. We find that all four methods focus on brain regions known to be involved in Alzheimer’s disease, such as the inferior and middle temporal gyri. While the occlusion-based methods focus more on specific regions, the gradient-based methods pick up distributed relevance patterns. Additionally, we find that the distribution of relevance varies across patients: some show a stronger focus on the temporal lobe, whereas for others more cortical areas are relevant. In summary, we show that applying different visualization methods is important to understand the decisions of a CNN, a step that is crucial to increase clinical impact and trust in computer-based decision support systems.
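For concreteness, here is a minimal sketch of one of the method families mentioned above, occlusion-based relevance mapping. This is not the authors’ code; the trained `model`, the patch size, and the stride are placeholder assumptions. The drop in the target-class probability when a patch is masked is taken as that patch’s relevance.

```python
import numpy as np
import torch

def occlusion_map(model, volume, target_class, patch=8, stride=8, fill=0.0):
    """Slide an occluding patch over a 3D volume and record how much the
    target-class probability drops; larger drops mean higher relevance.
    `model` is assumed to map a (1, 1, D, H, W) tensor to class logits."""
    model.eval()
    D, H, W = volume.shape
    x = torch.as_tensor(volume, dtype=torch.float32)[None, None]
    with torch.no_grad():
        base = torch.softmax(model(x), dim=1)[0, target_class].item()
    relevance = np.zeros((D, H, W), dtype=np.float32)
    for d in range(0, D, stride):
        for h in range(0, H, stride):
            for w in range(0, W, stride):
                occluded = x.clone()
                occluded[..., d:d+patch, h:h+patch, w:w+patch] = fill
                with torch.no_grad():
                    p = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                relevance[d:d+patch, h:h+patch, w:w+patch] = base - p
    return relevance
```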


Finding Effective Ways to (Machine) Learn fMRI-based Classifiers from Multi-Site Data

Roberto Vega and Russ Greiner

Machine learning techniques often require many training instances to find useful patterns, especially when the signal is subtle in high-dimensional data. This is particularly true when seeking classifiers of psychiatric disorders from fMRI (functional magnetic resonance imaging) data. Given the relatively small number of instances available at any single site, many projects try to use data from multiple sites. However, forming a dataset by simply concatenating the data from the various sites often fails due to batch effects – that is, the accuracy of a classifier learned from such a multi-site dataset is often worse than that of a classifier learned from a single site. We show why several simple, commonly used techniques – such as including the site as a covariate, z-score normalization, or whitening – are useful only in very restrictive cases. Additionally, we propose an evaluation methodology to measure the impact of batch effects in classification studies and propose a technique for correcting batch effects under the assumption that they are caused by a linear transformation. We empirically show that this approach consistently improves the performance of classifiers in multi-site scenarios and is more stable than the other approaches analyzed.
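To illustrate one of the simple baselines discussed above, here is a minimal sketch of per-site z-score normalization. This shows the baseline only, not the linear-transformation correction proposed in the paper; the feature-matrix and site-label names are assumptions.

```python
import numpy as np

def zscore_per_site(X, sites):
    """Standardize each feature within each acquisition site.
    X: (n_samples, n_features) feature matrix; sites: (n_samples,) site labels."""
    X_norm = np.empty_like(X, dtype=float)
    for s in np.unique(sites):
        mask = sites == s
        mu = X[mask].mean(axis=0)
        sd = X[mask].std(axis=0)
        # Avoid division by zero for constant features within a site.
        X_norm[mask] = (X[mask] - mu) / np.where(sd > 0, sd, 1.0)
    return X_norm
```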


3D multi-scale dense networks for multiple sclerosis classification based on structural MRI data

Fabian Eitel and Kerstin Ritter

Multiple sclerosis (MS) is a severe neurological disease characterized by several pathological processes, including inflammation, demyelination, and atrophy. It is the leading cause of severe disability in young adults. Most machine learning studies so far have focused only on one aspect of disease manifestation, namely focal lesions visible in structural magnetic resonance imaging (MRI) data. Here, we propose using 3-dimensional multi-scale dense networks (3D-MSDNets) for classifying MS patients and healthy controls using information from whole-brain MRI data. We show a considerable increase in accuracy in comparison with the lesion volume as a common clinical reference marker. Additionally, we demonstrate superior performance to vanilla convolutional neural networks.
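To give a rough feel for the architectural idea, below is a minimal, hedged PyTorch sketch of a 3D dense block applied at two spatial scales. It is not the authors’ 3D-MSDNet; all class names, layer sizes, and the two-scale layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Dense3DBlock(nn.Module):
    """Toy 3D dense block: each layer sees the concatenation of all previous
    feature maps, loosely in the spirit of densely connected networks."""
    def __init__(self, in_channels, growth=8, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels, growth, kernel_size=3, padding=1),
            ))
            channels += growth
        self.out_channels = channels

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

class Toy3DClassifier(nn.Module):
    """Two dense blocks at different spatial scales followed by a linear head."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.block1 = Dense3DBlock(1)
        self.pool = nn.MaxPool3d(2)  # move to a coarser spatial scale
        self.block2 = Dense3DBlock(self.block1.out_channels)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(self.block2.out_channels, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, D, H, W)
        return self.head(self.block2(self.pool(self.block1(x))))
```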