MACLEAN Workshop @ 4th MADICS Symposium – Lyon, 11-12/07/2022

Event website: https://www.madics.fr/event/symposium-madics-4/

The MACLEAN session will take place on Tuesday 12/07/2022 in the morning, from 10:15AM to 12:15PM. We are pleased to announce two keynote speakers:

  • Sophie Giffard-Roisin, IRD researcher at ISTerre Lab, Grenoble
    • Website: https://sophiegif.github.io/
    • Talk: Applications of deep learning in natural hazards and solid earth science using remote sensing
    • Abstract: Solid earth science has recently (finally!) begun to benefit from advances in machine learning, such as the automatic detection of earthquake events in seismic recordings. Moreover, remote sensing data (such as satellite optical and radar imagery), together with seismology and geodesy, have been important in solid earth sciences over the last decades, for example for estimating the ground motion during an earthquake, for estimating the change in ground state after a volcanic eruption or landslide, or even for mapping active faults. This talk will focus on new research works that aim to improve, automate, and create techniques for solid earth applications based on remote sensing, using data-driven methods. In particular, the goal is to show the variety of applied problems and how to develop a specific solution for each.
  • Alexandre Benoit, Professor at Polytech Annecy Chambéry, LISTIC
    • Website: https://sites.google.com/site/benoitalexandrevision/
    • Talk: Explainable AI for Earth observation
    • Abstract: Earth Observation (EO), like other domains, has seen impressive advances thanks to the availability of abundant data and modern AI methods, more specifically deep neural networks. However, most of the available EO data is unlabelled and generally illustrates a very local context with a specific orientation, climate… so that the generalization behaviour of machine learning models can be limited. In addition, model inference applied to EO may lead to costly decisions (infrastructure design or modification, agricultural spreading…), so automatic decisions should be justified or explained. In the era of deep learning-based models, opening these black boxes is a challenge in itself. In this presentation, we will present a variety of EO-related activities at the LISTIC Lab involving both classical and AI-based models. This will lead to a focus on contributions related to explainable AI along 3 complementary directions: black-box explanation, explanation by model design, and redescription mining.