Four new Associate Teams and one renewed Team with California Universities

PARIETAL Team © Inria / Photo Kaksonen

Inria is glad to announce the selection of 4 new Associate Teams with California universities (University of California Berkeley, Stanford University, University of Southern California and University of California Los Angeles) as part of the 2016 Inria Associate Team call. The four teams are created in 2016 for a period of 3 years.

In addition, one team that was created in 2013 (DALHIS) has been renewed for 3 years.

The Inria@SiliconValley program has 15 ongoing Associate Teams in 2016. See the Research Teams page for the full list.

4 new Associate Teams selected in 2016:

  • DECibel – “Discover, Express, Create – Interaction Technologies For Creative Collaboration”: The DECibel Associate Team brings together researchers from the Inria ExSitu team and the CITRIS Connected Communities Initiative (CCI) at UC Berkeley. ExSitu explores extreme interaction, working with creative professionals and scientists who push the limits of technology to develop novel interactive technologies that offer new strategies for creative exploration. ExSitu’s research activities include developing underlying theory (co-adaptive instruments and substrates), conducting empirical studies (participatory design with creative professionals), and implementing interactive systems (creativity support tools). The CITRIS Connected Communities Initiative investigates collaborative discovery and design through new technologies that enhance education, creative work, and public engagement. It develops interactive tools, techniques and materials for the rapid design and prototyping of novel interactive products, expertise sharing among designers, and citizen science investigations. DECibel will combine the strengths of these two groups to investigate novel tools and technologies that support Discovery, Expressivity, and Creativity.
  • DIVERSITY – “Measuring and Exploiting Diversity in Low-Power Wireless Networks”: DIVERSITY brings together researchers from the Inria EVA team and the Autonomous Networks Research Group (ANRG) at the University of Southern California (USC). The goal of DIVERSITY is to develop the networking technology for tomorrow’s Smart Factory. The two teams have perfectly complementary backgrounds in standardization and experimentation (Inria-EVA) and scheduling techniques (USC-ANRG). The key topic addressed by the Associate Team will be networking solutions for the Industrial Internet of Things (IIoT), with a particular focus on reliability and determinism.
  • LargeBrainNets – “Characterizing Large-scale Brain Networks in Typical Populations Using Novel Computational Methods for dMRI and fMRI-based Connectivity”: LargeBrainNets is an Associate Team between researchers from Inria Athena and the Stanford Cognitive and Systems Neuroscience Laboratory (SCSNL) in the Stanford University School of Medicine. Characterizing the link between the human brain’s structure and function in vivo is an inherently multidisciplinary task. LargeBrainNets brings together researchers with expertise in cognitive neuroscience, neuroimaging, computer science, and biostatistics – domains that are critical for accomplishing this work. In that respect, the involved teams complement each other: the SCSNL is a leading laboratory in cognitive science and systems neuroscience with strong technical expertise in quantifying brain function through rs-fMRI, and Inria Athena is a world-renowned neuroimaging lab with a track record of innovative methodologies for characterizing brain structure through dMRI.
  • LEGO – “LEarning GOod representations for natural language processing”: LEGO is a joint effort between researchers from Inria Magnet and the Theoretical and Empirical Data Science (TEDS) research group, a lab headed by Prof. Fei Sha within the UCLA Center for Machine Learning Research. LEGO lies at the intersection of Machine Learning and Natural Language Processing (NLP). The primary goal of LEGO is to advance both machine learning and NLP by developing a general and principled framework for representation learning of text data for NLP tasks. The proposed research addresses a crucial challenge: how do we derive representations of words, sentences, or documents whose interdependencies are given as structured objects such as graphs, so that they can be effectively used for structured prediction tasks in NLP?