Program & Keynotes

Program

Day 1 (Wednesday, November 6)
  • 9:15 – 9:40 Welcome coffee
  • 9:40 – 10:00 Introduction by Stéphane TANGUY (EDF, Director of Information Systems and Technologies) and Jean-Frédéric Gerbeau (Inria, Deputy CEO for Science)
  • 10:00 – 11:00 Keynote: Deep Learning meets Numerical Modeling, Patrick Gallinari, Sorbonne University and Criteo AI Lab (Slides)
  • 11:00 – 12:30 Talks (20 min each) – Moderator: Gabriel Antoniu
    • Patrick Valduriez, Inria. Data-intensive science (Slides)
    • Karim Chine, RosettaHUB Ltd. Towards a convergent collaborative platform for HPC, Big Data and AI in the cloud (Slides)
    • Emmanuel Jeannot, Inria. Using machine-learning methods to model the performance of HPC applications (Slides)
  • 12:30 – 13:45 Lunch break
  • 13:45 – 14:45 Keynote: HPC-Big Data convergence: the BDEC perspective. Mark Asch, Université de Picardie (Slides)
  • 14:45 – 16:15 Talks (15 min each) – Moderator: Philippe Preux
    • Valerie Gautard, CEA (Slides)
    • Damien Schmitt, EDF R&D, Reinforcement learning for optimizing nuclear power plant loading plans (Slides)
    • Bruno Conche, Total (Slides)
    • Teodora Petrisor, Thales, Towards Learning from Server to the Edge via Emerging Nanodevices (Slides)
    • Romain Lerallut, Criteo, AI in practice (Slides)
  • 16:15 – 16:45 Coffee break
  • 16:45 – 17:45 Round table: Large-scale AI for industry
    • Alejandro Ribes, EDF R&D, moderator
    • Damien Schmitt, EDF R&D
    • Bruno Conche, Total
    • Teodora Petrisor, Thales
    • Romain Lerallut, Criteo
    • Mark Asch, Université de Picardie
  • 18:00 – 20:00 Cocktail reception

Day 2 (Thursday, November 7)

Keynote Speakers


Deep Learning meets Numerical Modeling, Patrick Gallinari, Sorbonne University and Criteo AI Lab

Abstract

There is a current trend of research aimed at developing synergies between model-based approaches inherited from physics or numerical analysis and the agnostic AI/data-science paradigm. In this context, connections between deep learning models and differential equations for modeling dynamical systems have recently motivated several developments that could certainly be beneficial to both communities. Adopting the point of view of machine learning, I will give a brief overview of recent trends on this topic. I will then detail illustrative use cases in the domain of fluid dynamics, describing how to incorporate prior physical knowledge into deep learning systems and how to learn dynamical models, as well as a use case in video prediction.
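
The abstract stays at a high level; as a purely illustrative sketch of one way prior physical knowledge can be incorporated into a deep learning system (an assumption of this note, not material from the talk), the snippet below fits a network u(x, t) to sparse observations while penalizing the residual of a known PDE (1-D advection, u_t + c u_x = 0) at unlabeled collocation points:

```python
# Hypothetical sketch, not from the talk: fit a network u(x, t) to sparse
# observations while penalizing the residual of a known PDE, here 1-D
# advection u_t + c*u_x = 0 with assumed speed c and synthetic data.
import torch

torch.manual_seed(0)
c = 1.0  # assumed advection speed

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(x, t):
    """Residual u_t + c*u_x of the network's prediction, via autograd."""
    x, t = x.requires_grad_(True), t.requires_grad_(True)
    u = net(torch.stack([x, t], dim=-1))
    du_dx, = torch.autograd.grad(u.sum(), x, create_graph=True)
    du_dt, = torch.autograd.grad(u.sum(), t, create_graph=True)
    return du_dt + c * du_dx

# Sparse "observations" (synthetic, from the exact solution u = sin(x - c*t)).
x_obs = torch.rand(64) * 6.28
t_obs = torch.rand(64)
u_obs = torch.sin(x_obs - c * t_obs).unsqueeze(-1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    pred = net(torch.stack([x_obs, t_obs], dim=-1))
    data_loss = ((pred - u_obs) ** 2).mean()
    # Physics term on random unlabeled collocation points: no labels needed.
    x_col, t_col = torch.rand(256) * 6.28, torch.rand(256)
    phys_loss = (pde_residual(x_col, t_col) ** 2).mean()
    (data_loss + 0.1 * phys_loss).backward()
    opt.step()
```

The physics term acts as a regularizer: it constrains the network between the observation points, which is precisely where a purely data-driven fit would be unconstrained.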


Bio

Patrick Gallinari is a professor at Sorbonne University and a researcher at Criteo AI Lab, Paris. His research focuses on statistical learning and deep learning, with applications in fields such as semantic data processing and complex data analysis. In 2017, together with colleagues from Sorbonne University, he began exploring the development of physico-statistical systems that combine the model-based approaches of physics with the data-driven approaches of statistical learning. He was a pioneer in the development of neural networks in the 1990s. He leads a team whose central theme is statistical learning and deep learning (https://mlia.lip6.fr), and he was director of the Paris 6 computer science laboratory (LIP6) for nine years (2005 to 2013).


Engineering for the Age of Artificial Intelligence, Francisco Chinesta, ESI Group Chair @ ENSAM ParisTech

Abstract

Artificial intelligence is having a major impact on all areas of science and technology. When applied to engineering, it faces some specificities; the two most relevant concern, first, the existence of a rich corpus of physics-based models whose efficiency and relevance were proved by the astonishing and numerous accomplishments of the last century. This foundation is absent in many disciplines in which AI was identified from the beginning as an appealing route for mitigating the absence of models, or the poor generality and accuracy of the existing ones. Second, in engineering, and more specifically in industry, data is scarcer than most existing AI techniques require for learning models. Data is expensive to acquire, data collection is sometimes technologically challenging, and sometimes data remains simply unattainable. In those circumstances, physics-informed learning and machine learning techniques operating in the low-data limit seem compulsory, and they answer a tremendous and urgent need in some engineering applications.

This talk will focus on engineered-AI techniques, ensuring their transfer to engineering practice and enabling powerful digital twins. For that purpose we revisit six major topics: (i) visualization of multidimensional data; (ii) classification; (iii) modelling, that is, extracting the correlations between actions and reactions (inputs and outputs) in the form of an operative expression able to make accurate predictions in almost real time while proceeding from small amounts of data; (iv) certification of those data-driven models, necessary for deploying them massively in engineering applications; (v) explaining the learned models, the ultimate goal of intelligence and the only way to go beyond simple fitting, enabling the acquisition of knowledge and making discovery and innovation possible through extrapolation or inference; and (vi) integrating these technologies at the component and complex-system levels, enabling, when combined with physics-based models, digital twins of materials, processes, structures and systems.
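
As a hedged illustration of topic (iii), and not material from the talk: one classical way to obtain such an "operative expression" from small amounts of data is to compress a handful of simulation snapshots with an SVD (proper orthogonal decomposition) and regress the reduced coordinates on the input parameter, so that a new prediction costs only two small matrix products. Everything below (the parametric field, the snapshot count, the cubic features) is an assumption for the example:

```python
# Hypothetical sketch, not from the talk: a low-data surrogate that compresses
# a few simulation snapshots with an SVD (POD) and regresses the reduced
# coordinates on the input parameter, giving near-real-time predictions.
import numpy as np

# Assumed setup: 10 "simulations" of a field on 500 nodes, each driven by a
# scalar parameter mu (a synthetic parametric field stands in for a solver).
mu = np.linspace(0.5, 2.0, 10)                    # input parameters (actions)
grid = np.linspace(0.0, 1.0, 500)
snapshots = np.array([np.sin(m * np.pi * grid) * m for m in mu])  # (10, 500)

# POD: keep the r dominant spatial modes of the snapshot matrix.
r = 3
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = Vt[:r]                                    # (r, 500) spatial basis
coords = snapshots @ modes.T                      # (10, r) reduced coordinates

# Cheap polynomial least-squares map from mu to the reduced coordinates.
P = np.vander(mu, 4)                              # cubic features in mu
coef, *_ = np.linalg.lstsq(P, coords, rcond=None)

def surrogate(m):
    """Near-real-time prediction of the full field for a new parameter m."""
    return (np.vander([m], 4) @ coef) @ modes     # (1, 500)

# Error of the surrogate at an unseen parameter value.
print(np.abs(surrogate(1.23) - np.sin(1.23 * np.pi * grid) * 1.23).max())
```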


Bio

Francisco Chinesta is currently full professor of computational physics at ENSAM ParisTech (Paris, France). He was AIRBUS Group chair professor from 2008 to 2012, and since 2013 he has been ESI Group chair professor on advanced modeling and simulation of materials, structures, processes and systems. He is an honorary fellow of the Institut Universitaire de France and a Fellow of the Spanish Royal Academy of Engineering. He has received many scientific awards in four different fields (bio-engineering, material forming processes, rheology and computational mechanics), among them the IACM Zienkiewicz and ESAFORM awards. He is the author of 300 papers in peer-reviewed international journals and more than 600 contributions to conferences. He is president of the French association of computational mechanics (CSMA) and director of the CNRS research group (GdR) on model-order-reduction techniques for engineering sciences. He is an editor and associate editor of many journals, president of the ESI Group scientific committee, and director of the ESI Group scientific department. In 2018 he received a Doctorate Honoris Causa from the University of Zaragoza (Spain), and in 2019 the Silver Medal of the French CNRS.

HPC-Big Data convergence: the BDEC perspective. Mark Asch, Université de Picardie

Abstract

The convergence of Big Data and HPC has been broadly recognised as a high priority. Adding to this is the recent upsurge in data flows streaming in from scientific instruments (from the very large to the very small) and from sensors at the edge. This data also needs to be transmitted, analysed, stored, etc. In the BDEC consortium (www.exascale.org/bdec) we have been working on the new concept of a digital continuum, which will define international standards, methods and tools for this new paradigm. Of course, machine learning (ML) will be omnipresent, and there is an urgent need for a data logistics network (DLN).

In this talk I will present the work of BDEC and open the floor to a discussion on demonstrators, which are our way of preparing software platforms to address the above issues.

Bio

Mark Asch holds a B.S. degree in agronomy and an M.S. degree in applied physics from the Hebrew University of Jerusalem (1984), and M.S. and Ph.D. degrees in mathematics (1990) from the Courant Institute of New York University. After post-doctoral work at the Institute for Advanced Study, Princeton, and at INRIA, France, he was appointed assistant professor at the University of Paris XI. In 2001 he was appointed professor of mathematics at the University of Toulon, and since 2005 he has been at the University of Picardy, where he was vice-chancellor for research between 2005 and 2008. After two years as scientific officer for HPC at CNRS in Paris and three years at the French Ministry of Research as scientific officer for mathematics, computing and e-infrastructures, he was on secondment at the French Research Agency (ANR). Between 2017 and 2019 he was scientific advisor for data science and AI at Total R&D headquarters in Paris. His research interests are in environmental acoustics, wave propagation, random media, control theory and the application of control methods to inverse problems. He has published over 70 articles and conference proceedings in these domains. His latest book, “Data Assimilation: Methods, Algorithms and Applications”, was published in 2016 by SIAM, USA. He led an action theme in the Belmont Forum “Data Management and e-Infrastructure” initiative and is currently the European coordinator of the international BDEC (Big Data and Extreme-Scale Computing) forum. Prof. Asch is a member of the Acoustical Society of America (ASA), the IEEE and the Society for Industrial and Applied Mathematics (SIAM).

Running large models in minutes: an engineering journey through high performance for AI. Julie Bernauer, Director, Deep Learning Systems Engineering, NVIDIA, CA

Abstract

HPC and AI are on a collision course, with rapid advancements in multiple fields from climate modeling to drug design. Data from different types of sensors and instruments is driving ever larger and more complex models. Challenging workloads like BERT for natural language processing (NLP) and speech recognition, benchmarks like MLPerf, and the need to keep up with state-of-the-art models and research in AI all highlight the importance of being able to quickly train and tune such models. Until recently, system design for AI and for HPC was often done in isolation, as the requirements of the platforms were viewed as different. Now the largest supercomputers in the world are designed with AI in mind, and enterprise and AI research systems are being designed more like supercomputers. Scaling is a core part of modern AI frameworks and methodologies. In this talk we will cover how we think about and design infrastructure that can be leveraged to support the needs of AI research and development teams, and how modern AI frameworks and models are built to leverage these systems.
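
As a purely illustrative sketch of the scaling that is "a core part of modern AI frameworks" (the model, data, and launch setup below are assumptions for the example, not from the talk), PyTorch's DistributedDataParallel wraps a model so that each process trains on its own data shard while gradients are averaged across ranks during the backward pass:

```python
# Hypothetical sketch, not from the talk: minimal multi-GPU data-parallel
# training with PyTorch DistributedDataParallel (DDP). Launch with e.g.:
#   torchrun --nproc_per_node=8 train.py
# The model and data below are placeholders for the example.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")      # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
torch.cuda.set_device(local_rank)

model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    x = torch.randn(64, 1024, device="cuda") # this rank's shard of the batch
    loss = model(x).square().mean()          # placeholder objective
    opt.zero_grad()
    loss.backward()                          # gradients all-reduced across ranks
    opt.step()

dist.destroy_process_group()
```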

Bio

Julie Bernauer is Director for Deep Learning Systems Software at NVIDIA Corporation. Her team focuses on several aspects of deep learning systems, including performance, large-scale deep learning, and deployments for hyperscale and cloud services. She joined NVIDIA in 2015 after fifteen years in academia as an expert in machine learning for computational structural biology. She obtained her PhD from Université Paris-Sud, studying geometric and statistical models for modelling protein complexes. After a post-doc at Stanford University with Prof. Michael Levitt (Nobel Prize in Chemistry, 2013), she joined Inria, the French national institute for computer science. While a Senior Research Scientist at Inria, an Adjunct Associate Professor of Computer Science at École Polytechnique, and a Visiting Research Scientist at SLAC, she worked on computational methods for structural bioinformatics, specifically scoring functions for macromolecular docking using machine learning and statistical potentials for molecular simulations.
