See pad for details.
13h45-14h15: Welcome and general news about the project
- new intern (LACODAM/MULTISPEECH): M. Christian Bile until September 2021.
- new ARP INRIA in TAU: M. Alessandro LEITE, working on the European FET project TRUST-AI, and the 2 interns he is supervising: Alex Westbrook and Mathurin Videau
- Thomas Guyet (from LACODAM) is working with a new PhD student at Orange, Victor Guyomard (started in October 2020), on counterfactual explanations with prototypes: challenge 2
14h15-15h10: Georgios Zervakis, “On Refining BERT Contextualized Embeddings using Semantic Lexicons”, Challenge 1. SLIDES
15h10-15h50: work of Mohit Mittal, “Inspecting and Debugging a VQA System via Explanations” (presented by Luis Galarraga), Challenge 3. SLIDES
16h15-17h: Jan Ramon, “Interpretable Privacy” (work with Moitree Basu)
Challenge 1 of HyAIAI (how to impose declarative constraints on a numerical model). SLIDES
17h00-17h45: Alessandro LEITE, reading group on XAI: “eXplainable Artificial Intelligence: a literature review”. slides_explainable_ai_review
For NEXT TIME:
Victor Guyomard Reading group: [Prototype Explanations] Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, Jonathan Su: This Looks Like That: Deep Learning for Interpretable Image Recognition. NeurIPS 2019: 8928-8939
or the AAAI paper about prototype explanations:
Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin: Deep Learning for Case-Based Reasoning Through Prototypes: A Neural Network That Explains Its Predictions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.