BIS 2015 – Day 2 – Parallel Working Sessions on Neuroimaging, Scientific Computing, and Smart City – May 13th, 2015


08:30: Registration

09:00-12:00: Parallel working sessions, with coffee break at 10:30

12:00-02:00: Lunch break in SDH-250

02:00-05:00: Parallel working sessions continue, with coffee break at 03:30

Parallel Sessions

Brainhack on Open-source Python Code Development for Neuroimaging

Meeting room 240, CITRIS-SDH Building

Session Chairs

Bertrand Thirion, Loïc Estève, Inria; Jean-Baptiste Poline, UC Berkeley

Participating teams



The development and quality of neuroscience research depend on the quality of analysis tools, and thus on the quality of data-analysis software packages. In a neuroimaging laboratory, 80% of a PhD student's or post-doc's time is spent analyzing data with various software packages, so improving these tools is a major challenge for the field. As the size of available data keeps increasing, the future of the field depends more and more on the quality of software packages and on their ability to scale.

Neuroimaging is one of the main ways of accessing the organisation of the human brain. The field comprises diverse functional imaging modalities, e.g. functional MRI (fMRI), electroencephalography (EEG) and magnetoencephalography (MEG), as well as anatomical ones, e.g. T1-weighted and diffusion-weighted imaging. In the past few years the neuroimaging community has converged on the Python language, and Python packages have been developed by many different laboratories; more generally, the international scientific Python community is growing fast. The goal of this workshop is to gather developers working at Inria, Stanford, or Berkeley on various neuroimaging Python projects (Nipy, Nipype, Nilearn, Dipy, PySurfer, etc.), in order to foster and coordinate the development of these projects and to discuss best practices.


See the dedicated wiki page at:

Back to top

Working session on Scientific and Large Scale Computing

Meeting room 242, CITRIS-SDH Building

Session Chairs

Laura Grigori, Inria; Eric Darve, Stanford University

Participating teams



This session will gather Inria and Bay Area researchers working on different aspects of data science and high-performance computing. Speakers will present novel results covering numerical and randomized linear algebra, parallel and heterogeneous computing, big-data analysis, and uncertainty quantification.


See the dedicated web page at:
Fast solvers and randomized linear algebra

9:00 am – 12:00 pm, with coffee break: 10:30 am – 11:00 am

  • Ming Gu, UC Berkeley: Spectrum-revealing Matrix Factorizations – abstract
  • M. Mahoney, UC Berkeley: Randomized Numerical Linear Algebra – abstract
  • Eric Darve, Stanford: O(N) linear solvers for general H^2 matrices – abstract
  • Jack Poulson, Stanford: Revisiting distributed sparse-direct solvers for Interior Point Methods and generalized Least Squares problems – abstract
  • Laura Grigori, Inria: Communication avoiding algorithms for computing low rank approximations – abstract

Lunch break: 12:00 pm – 1:30 pm

Accessing and computing with big data

1:30 pm – 3:00 pm

  • Shivaram Venkataraman, UC Berkeley: High performance linear algebra on Spark – abstract
  • Tristan Allard, Inria: PINED-RQ: A Differentially Private Index on Encrypted Databases for supporting Range Queries – abstract
    and Maximilien Servajean, Inria: Increasing Coverage in Distributed Search and Recommendation with Profile Diversity – abstract
  • Lavanya Ramakrishnan, LBNL: Tigres, a User-level Workflow Library for DALHIS – abstract

Coffee break: 3:00 pm – 3:30 pm

Uncertainty quantification

3:30 pm – 5:00 pm

  • Pietro Congedo, Inria: Some recent studies on uncertainty quantification and robust optimization in the AQUARIUS Team – abstract
  • Gianluca Geraci, Stanford University: UQ in particle-laden turbulent flows – Stanford PSAAP II project – abstract
  • Akshay Mittal, Stanford University/Inria: UQ Algorithms for complex multiphysics applications – abstract

Back to top

Working session on Smart City & Mobility

Meeting room 254, CITRIS-SDH Building

Session Chairs

Valerie Issarny, Sara Hachem, Inria@SiliconValley; Alexey Pozdnukhov, UC Berkeley

Participating teams


CityLab@Inria Teams

Other Inria teams: RITS


The smart city vision raises the prospect that cities will become more sustainable environments, ultimately enhancing citizens' well-being. It further promises radically new ways of living in, regulating, operating, and managing cities, through the increasingly active involvement of citizens by way of crowdsourcing/crowdsensing and social networking.

From a more technical perspective, smart cities are fascinating yet challenging systems of systems for the digital sciences and technologies, owing to the key characteristics of connected cities and especially their scale. Moreover, the vision of what smart cities should be is evolving at a fast pace, in close concert with the latest technology trends, especially mobile social networking, the IoT, and open data.

Inria and its partners in the Inria@SiliconValley program have engaged in research collaboration on smart cities and the related issue of urban mobility. This session is structured around panels and offers an occasion to discuss ongoing as well as potential future collaborations on the theme.


09:00: Welcome

09:30: Panel – “Sensing & Networking”

10:30: Break

11:00: Keynote – From smart urban systems to smart environments: Wireless in the Woods, Steven Glaser, University of California, Berkeley & REALMS Associate Team

Wireless sensor networks are becoming an integral component of real-time snow water equivalent (SWE) monitoring infrastructure, but formalized methodologies for their design and optimization have not been sufficiently developed. Representative sampling locations and network topologies are optimized heuristically in the field, which is resource-intensive, lacks repeatability, and lowers the likelihood of deploying resilient networks. These methods do not account for the non-uniform correlations of the physiographic variables underlying SWE distributions, lack metrics for determining the optimal number of sensor stations at each site, and do not provide performance guarantees for wireless network structures in complex terrain. We show how pattern-recognition and optimization algorithms can be combined with remotely sensed data to optimize sensor networks prior to deployment.

A high-resolution dataset of independent physiographic variables is extracted from LiDAR and aerial photographs and combined into a multivariate feature space, normalized and scaled by each variable's correlation with snow depth. Representative sampling regions are determined using expectation-maximization of a Gaussian Mixture Model (GMM). An information-theoretic metric is used to determine the optimal number of sensor stations. Resilient network structures are found by optimizing a non-convex, multi-objective function using a combination of stochastic gradient descent and graph-search algorithms.

This enables prior knowledge of network behavior to inform optimal deployments for new networks. We compare the results of the algorithm to existing sites at the Southern Sierra Critical Zone Observatory (SSCZO) and in the American River basin. Placements determined by the GMM are at least as representative as existing expert-placed networks. To investigate the effect of site properties on the optimal number of sensor stations needed to representatively measure the site, we apply the algorithm at two other CZOs: Jemez River and Boulder Creek. We show that the distribution and relative correlation of independent variables can have a considerable effect on the optimal number of sensor stations, which ranges from twelve to seventeen in this study.
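The core of the pipeline sketched in the abstract — clustering a physiographic feature space with EM on a GMM, then using an information-theoretic criterion to pick the number of components (and hence sensor stations) — can be illustrated in a few dozen lines of numpy. This is a minimal sketch on synthetic data, not the authors' actual code: a spherical-covariance GMM is assumed for simplicity, and BIC is assumed as a stand-in for the unnamed information-theoretic metric.

```python
import numpy as np

def fit_gmm(X, k, n_iter=200):
    """Fit a spherical Gaussian Mixture Model to X (n, d) with plain EM.

    Returns (weights, means, variances, log_likelihood). Means are
    initialized by greedy farthest-point selection so that well-separated
    regions of the feature space each receive a starting component.
    """
    n, d = X.shape
    idx = [0]
    for _ in range(k - 1):                       # farthest-point initialization
        dist = ((X[:, None, :] - X[idx]) ** 2).sum(-1).min(axis=1)
        idx.append(int(dist.argmax()))
    means = X[idx].astype(float)
    variances = np.full(k, X.var() + 1e-9)
    weights = np.full(k, 1.0 / k)
    ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | point i),
        # computed in log space with the log-sum-exp trick for stability.
        sq = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)       # (n, k)
        log_p = (np.log(weights)
                 - 0.5 * d * np.log(2.0 * np.pi * variances)
                 - sq / (2.0 * variances))
        m = log_p.max(axis=1, keepdims=True)
        log_norm = m + np.log(np.exp(log_p - m).sum(axis=1, keepdims=True))
        r = np.exp(log_p - log_norm)                                  # (n, k)
        new_ll = float(log_norm.sum())
        # M-step: re-estimate weights, means, and spherical variances.
        nk = r.sum(axis=0) + 1e-12
        weights = nk / n
        means = (r.T @ X) / nk[:, None]
        sq = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        variances = (r * sq).sum(axis=0) / (d * nk) + 1e-9
        if abs(new_ll - ll) < 1e-8:
            break
        ll = new_ll
    return weights, means, variances, new_ll

def bic(X, k):
    """Bayesian information criterion for a k-component spherical GMM.
    Lower is better; the log(n) penalty discourages extra components."""
    n, d = X.shape
    *_, ll = fit_gmm(X, k)
    n_params = k * d + k + (k - 1)               # means + variances + weights
    return n_params * np.log(n) - 2.0 * ll
```

In this sketch, each fitted component mean marks a representative sampling region, and the number of stations would be chosen by evaluating `bic(X, k)` over a range of k (say 1 to 20, covering the twelve-to-seventeen range the study reports) and taking the minimizer.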

12:00: Lunch Break (SDH-250)

02:00: Panel – “Urban Distributed Systems”

03:30: Coffee break

04:00: Panel – “Urban Mobility”

Back to top