All times are given in UTC+2. The program spans two days: Monday morning, September 18th, and Friday morning, September 22nd.
Monday Morning (PoliTo Room 12i):
9:00am – 9:15am: Welcome and introduction
9:15am – 10:30am: Invited talk by Mihaela van der Schaar, Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Fellow at The Alan Turing Institute in London
Title of the talk: Turning the lights on inside the black box: New frontiers in machine learning interpretability from time-series to causal inference to unsupervised learning
10:30am – 11:00am: First session of presentations
– 10:30am: An Efficient Shapley Value Computation for the Naive Bayes Classifier [Slides] [Paper]
– 10:45am: Predicate-based explanation of a Reinforcement Learning agent via action importance evaluation [Slides] [Paper]
11:00am – 11:30am: Coffee break
11:30am – 12:15pm: Second session of presentations
– 11:30am: Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces [Slides] [Paper]
– 11:45am: An Experimental Investigation into the Evaluation of Explainability Methods for Computer Vision
– 12:00pm: Natively Interpretable t-SNE [Slides] [Paper]
12:15pm: Feedback from workshop participants
Friday Morning (PoliTo Room 8i):
9:00am – 11:00am: Introduction and Tutorial on Explainable Graph ML: Part 1
11:00am – 11:15am: Coffee break
11:15am – 12:00pm: Tutorial: Part 2
12:00pm – 12:15pm: Feedback and discussion
12:15pm – 1:00pm: Third session of presentations
– 12:15pm: On the Adaptability of Attention-Based Interpretability in Different Transformer Architectures for Multi-Class Classification Tasks [Slides] [Paper]
– 12:30pm: Analyzing the Explanation and Interpretation Potential of Matrix Capsule Networks [Slides] [Paper]
– 12:45pm: Local interpretability of random forests for multi-target regression [Slides] [Paper]
1:00pm: Closing