About AIMLAI (Workshop + Tutorial)
Recent technological advances rely on accurate decision-support systems that can be perceived as black boxes due to their overwhelming complexity. This lack of transparency can lead to technical, ethical, legal, and trust issues. For example, if the control module of a self-driving car fails to detect a pedestrian, it becomes crucial to know why the system erred. In other cases, a decision system may reflect unacceptable biases that generate distrust. The General Data Protection Regulation (GDPR), adopted by the European Parliament in 2016 and in force since 2018, suggests that individuals should be able to obtain explanations of automated decisions made from their data, and to challenge those decisions. All these reasons have propelled research in interpretable and explainable AI/ML.

The AIMLAI workshop aims to gather researchers, experts, and professionals from inside and outside the domain of AI who are interested in interpretable and explainable AI. The workshop encourages interdisciplinary collaborations, with particular emphasis on knowledge management, information visualization (infovis), human-computer interaction, and psychology. It also welcomes applied research on use cases where interpretability matters.
AIMLAI aims to become a discussion venue for the advent of novel interpretable algorithms and explainability modules that mediate the communication between complex ML/AI systems and users.
For this sixth edition we have the pleasure to announce Prof. Mihaela van der Schaar as our keynote speaker. In addition, this year's edition will put particular emphasis on explainable graph-based machine learning (GraphML) and will therefore feature a tutorial on the topic on September 22nd. The tutorial will cover the latest post-hoc explainability techniques for graph neural networks (GNNs) as well as explainability for models based on knowledge graph embeddings.
Besides the central topic of interpretable algorithms, AIMLAI also emphasizes methodological aspects of measuring and evaluating interpretability in ML models and AI algorithms from a user perspective. A non-exhaustive list of topics follows:
- Interpretability and explanations in ML
  - Supervised and unsupervised ML
  - Explaining recommendation models
  - Multimodal explanations
  - Interpretable/explainable GraphML
- Transparency in AI and ML
  - Ethical aspects
  - Legal aspects
  - Fairness issues
- Methodology and formalization of interpretability
  - Formal measures of interpretability
  - Interpretability/complexity trade-offs
  - Methodological guidelines to evaluate interpretability
- User-centric interpretability
  - Semantic interpretability: how to add semantics to explanations
  - Human-in-the-loop approaches to construct and/or evaluate interpretable models
  - Combining ML models with infovis and human-machine interfaces
While submissions on interpretability and explanations for classical supervised learning models are welcome, we particularly encourage contributions on techniques and on definitions/formalizations of these concepts for unsupervised learning.
Accepted papers will be included in joint post-workshop proceedings (Machine Learning and Principles and Practice of Knowledge Discovery in Databases) published by Springer, in 1-2 volumes organized by focused scope and possibly indexed by Web of Science (WoS). Authors will have the option to opt in or opt out.