Monday, January 16, 2017
Stanley Durrleman, Inria

Learning digital brain models from multimodal image data

We will present algorithms and methods to automatically build digital brain models from sets of observations. These observations may take the form of series of medical images or of 3D geometrical objects extracted from these images, such as surface or curve meshes. One advantage of these methods is that they do not require establishing correspondences between homologous points in data from different individuals. We will show how these methods allow the study of the variability in the relative position of two objects, such as the position of white matter fiber tracts relative to the cortex.
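As an illustration of correspondence-free shape comparison, here is a minimal Python sketch of a kernel-based, currents-style metric between two curve meshes. It is not the speaker's implementation; the function names, the Gaussian kernel, and the bandwidth `sigma` are our own illustrative choices. The point is that the two curves may be sampled with different numbers of points and are never matched point-to-point.

```python
import numpy as np

def curve_to_currents(points):
    """Represent a polyline by segment centers and tangent vectors
    (a currents-style representation: no point correspondences needed)."""
    centers = 0.5 * (points[1:] + points[:-1])
    tangents = points[1:] - points[:-1]
    return centers, tangents

def currents_inner(c1, t1, c2, t2, sigma):
    """Kernel inner product between two curves seen as currents."""
    d2 = np.sum((c1[:, None, :] - c2[None, :, :]) ** 2, axis=-1)
    return np.sum(np.exp(-d2 / sigma**2) * (t1 @ t2.T))

def currents_distance(points_a, points_b, sigma=0.5):
    """Squared distance in the currents metric; insensitive to how each
    curve is sampled, so homologous points are never required."""
    ca, ta = curve_to_currents(points_a)
    cb, tb = curve_to_currents(points_b)
    return (currents_inner(ca, ta, ca, ta, sigma)
            - 2 * currents_inner(ca, ta, cb, tb, sigma)
            + currents_inner(cb, tb, cb, tb, sigma))

# Two similar curves sampled with different numbers of points.
ta_ = np.linspace(0, np.pi, 30)
tb_ = np.linspace(0, np.pi, 45)
curve_a = np.stack([np.cos(ta_), np.sin(ta_), np.zeros_like(ta_)], axis=1)
curve_b = np.stack([np.cos(tb_), np.sin(tb_), 0.05 * np.ones_like(tb_)], axis=1)
print(currents_distance(curve_a, curve_b))
```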

We will also present our contribution to the study of longitudinal data, in which individuals are observed at multiple points in time. Digital models then become dynamic: they summarize the temporal evolution observed in the data and allow each individual to be positioned with respect to a reference trajectory of changes. Individual parameters account for the variability in the data at a given stage and in the pace of changes. This work has been used to construct typical scenarios of cognitive decline in the progression of Alzheimer’s disease.
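To make the idea of positioning individuals along a reference trajectory concrete, here is a minimal sketch assuming a logistic reference curve of cognitive decline and an affine time reparameterization per individual. The trajectory shape, the parameter names (`alpha` for pace, `tau` for time shift), and all numerical values are illustrative assumptions, not the speaker's model.

```python
import numpy as np

def reference_trajectory(t):
    """A hypothetical reference scenario: a logistic drop in a cognitive
    score as the disease progresses (illustrative numbers only)."""
    return 30.0 / (1.0 + np.exp(0.3 * (t - 75.0)))

def individual_trajectory(t, alpha, tau, t0=75.0):
    """Position an individual on the reference trajectory via an affine
    time change: alpha rescales the pace of changes, tau shifts the
    individual timeline relative to the reference one."""
    return reference_trajectory(alpha * (t - t0 - tau) + t0)

ages = np.linspace(60, 90, 7)
print(individual_trajectory(ages, alpha=1.0, tau=0.0))   # follows the reference
print(individual_trajectory(ages, alpha=1.5, tau=-5.0))  # earlier onset, faster decline
```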
