Scientific meeting 28/09/2020

Meeting notes: https://pad.inria.fr/p/g.ok3rdLX0dFe05Xs6$np_6Ojzf6LKRAfXYA17

  • 13:40 – 14:20: Elisa Fromont: status of the different recruitments of the project and news about the XAI community in general

 

  • 14:30 – 15:10: Miguel Couceiro – Université de Lorraine, CNRS, Inria N.G.E., Loria

 Title. LimeOut: An ensemble approach to improve process fairness

Abstract. In this talk we address the question of "process fairness". More precisely, we consider the problem of making classifiers fairer by reducing their dependence on sensitive features while increasing (or, at least, maintaining) their accuracy. To achieve both, we draw inspiration from "dropout" techniques in neural-based approaches and propose a framework that relies on "feature drop-out" to tackle process fairness. We make use of "LIME explanations" to assess a classifier's fairness and to determine the sensitive features to remove. This produces a pool of classifiers (through feature drop-out) whose ensemble is shown empirically to be less dependent on sensitive features, with improved or maintained accuracy. This basic framework was presented at XKDD 2020 (joint work with Vaishnavi Bhargava and Amedeo Napoli), but we will also discuss some recent developments with Guilherme Alves.

https://hal.archives-ouvertes.fr/hal-02864059

Video: https://slideslive.com/38933059/limeout-an-ensemble-approach-to-improve-process-fairness
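The feature drop-out idea at the heart of LimeOut can be sketched in a few lines. Everything below is illustrative, not the authors' implementation: the toy data, the choice of sensitive features (which LimeOut obtains from LIME explanations rather than hard-coding), and the minimal logistic model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends on feature 0; features 1 and 2 play the role
# of sensitive features here (an assumption for illustration).
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

def train_logreg(X, y, steps=500, lr=0.1):
    """Minimal logistic regression via gradient descent (illustrative)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Suppose LIME explanations flagged features 1 and 2 as sensitive.
sensitive = [1, 2]

# Feature drop-out: one classifier per dropped sensitive feature, plus one
# trained with all sensitive features removed.
pools = [[f for f in range(X.shape[1]) if f != s] for s in sensitive]
pools.append([f for f in range(X.shape[1]) if f not in sensitive])

models = [(cols, train_logreg(X[:, cols], y)) for cols in pools]

def ensemble_proba(x):
    """Average the member probabilities: the ensemble's output."""
    probs = [1.0 / (1.0 + np.exp(-(x[cols] @ w))) for cols, w in models]
    return float(np.mean(probs))

acc = np.mean([(ensemble_proba(x) > 0.5) == t for x, t in zip(X, y)])
print(f"ensemble training accuracy: {acc:.2f}")
```

Because every ensemble member has at least one sensitive feature removed, no single sensitive feature can dominate the averaged prediction, which is the intuition behind the improved process fairness.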

 

  • 15:10 – 15:50: Esteban Marquer, Université de Lorraine, CNRS, Inria N.G.E., Loria (Intern, paid on the project)

      Title. LatticeNN – Deep Learning and Formal Concept Analysis

      Abstract. In recent years there has been increasing interest in approaches that combine formal knowledge and artificial neural networks (NNs), called neuro-symbolic approaches. Formal concept analysis (FCA) is a powerful formal tool for understanding complex data represented as a formal context (FC). FCA can be used to generate a structured view of the data, typically a hierarchy of formal concepts called a concept lattice, or an ontology. It can also discover implications between some aspects of the data and generate explainable formal rules grounded in the data, which can in turn be used to construct decision systems from the data.

      Together with Ajinkya Kulkarni and Miguel Couceiro, we explore ways to address the scalability problem inherent to FCA, with the hope of revealing implicit information not expressed by FCA, by using deep learning to reproduce the processes of FCA. Recently, neural generative models for graphs have achieved strong performance on the generation of specific kinds of graphs. We therefore explore an approach that reproduces the formal lattice graph of FCA using generative neural models for graphs, in particular GraphRNN. Additionally, we develop a data-agnostic embedding model for formal concepts, Bag of Attributes (BoA). Relying on the performance of BoA, we develop an approach to generate the intents, i.e., the descriptions of the formal concepts. We report experimental results on generated and real-world data to support our conclusions. This research project was financed by the Inria Project Lab HyAIAI.
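To make the FCA vocabulary concrete, here is a minimal sketch that enumerates all formal concepts (extent/intent pairs) of a tiny formal context. The animal context is purely illustrative; real contexts are far larger, which is exactly the scalability problem the talk addresses.

```python
from itertools import combinations

# A toy formal context: objects and the attributes they have (illustrative).
context = {
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
attributes = {"flies", "swims", "hunts"}

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in objs."""
    shared = set(attributes)
    for o in objs:
        shared &= context[o]
    return shared

# A formal concept is a pair (extent, intent) closed under the two maps.
# Enumerating every attribute subset and closing it yields all concepts.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        e = extent(set(combo))
        concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(e), sorted(i))
```

Ordering these concepts by extent inclusion gives the concept lattice; the exponential number of attribute subsets in this brute-force enumeration is what motivates the neural approximation explored in the talk.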

  • 15:50 – 16:05: BREAK
  • 16:05 – 16:50: Neetu Kushwaha – Inria Postdoc LACODAM/MULTISPEECH
      Title. You’re About to Make a Huge Mistake! – Finding Faulty Patterns in Neural Networks (possible submission to IDA 2020) + discussions
      Abstract. Deep learning models give very impressive results on many applications. However, the reasons why a deep neural network fails to classify a particular example at test time are usually unclear, especially when the network is highly confident about its decision. We investigate whether it is possible to identify groups of neurons of a trained network that could be responsible for most of the network's mistakes. By identifying such "faulty" neurons, we are able to detect, at test time, wrong network decisions. This strategy paves the way to more elaborate debugging strategies for constructing trustworthy networks.
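The faulty-neuron idea from this abstract can be sketched on synthetic activations. The scoring rule (mean activation gap between wrong and correct predictions) and the flagging threshold below are illustrative assumptions, not the authors' method, and the activations are random rather than taken from a real network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: activations of one hidden layer (examples x neurons)
# plus a flag saying whether the network classified each example correctly.
n_examples, n_neurons = 500, 16
acts = rng.normal(size=(n_examples, n_neurons))
correct = rng.random(n_examples) < 0.9   # ~10% of examples are mistakes
acts[~correct, 3] += 2.0                 # synthetic: neuron 3 fires on mistakes

# Score each neuron by the gap between its mean activation on wrong vs.
# correct predictions; a large gap suggests a "faulty" neuron.
gap = acts[~correct].mean(axis=0) - acts[correct].mean(axis=0)
faulty = np.argsort(gap)[::-1][:1]
print("suspected faulty neurons:", faulty)

# At test time, flag inputs that strongly activate the faulty neurons.
threshold = acts[:, faulty].mean() + acts[:, faulty].std()
flagged = acts[:, faulty].max(axis=1) > threshold
print(f"flagged {flagged.sum()} of {n_examples} examples for review")
```

Flagged examples would then be routed to a human or a fallback model, which is the kind of test-time mistake detection the abstract describes.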
  • 17:00 – 18:00: INVITED Talk (ECAI 2020): Luc De Raedt – From Probabilistic Logics to Neuro-Symbolic Artificial Intelligence.

Luc De Raedt, Sebastijan Dumancic, Robin Manhaeve, Giuseppe Marra: From Statistical Relational to Neuro-Symbolic Artificial Intelligence. IJCAI 2020: 4943-4950

 Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt: DeepProbLog: Neural Probabilistic Logic Programming. NeurIPS 2018: 3753-3763
