Recent technological advances rely on accurate decision support systems that can be perceived as black boxes due to their overwhelming complexity. This lack of transparency can lead to technical, ethical, legal, and trust issues. For example, if the control module of a self-driving car fails to detect a pedestrian, it is crucial to know why the system erred. In other cases, the decision system may reflect unacceptable biases that generate distrust. The General Data Protection Regulation (GDPR), which came into effect in 2018, suggests that individuals should be able to obtain explanations of the decisions made from their data by automated processing, and to challenge those decisions. All these reasons have given rise to the domain of interpretable AI.

AIMLAI aims to gather researchers, experts, and professionals, from inside and outside the domain of AI, who are interested in interpretable ML and interpretable AI. The workshop encourages interdisciplinary collaborations, with particular emphasis on knowledge management, information visualization, human-computer interaction, and psychology. It also welcomes applied research for use cases where interpretability matters.
AIMLAI envisions becoming a discussion venue for the advent of novel interpretable algorithms and explainability modules that mediate the communication between complex ML/AI systems and their users.
Besides the central topic of interpretable algorithms, AIMLAI also emphasizes the methodological aspects of measuring and evaluating interpretability in ML models and AI algorithms from a user perspective. A non-exhaustive list of topics follows:
- Interpretability and explanations in ML
  - Machine learning models that are directly interpretable
  - Explanation modules for black-box models (post-hoc interpretability)
- Transparency in AI and ML
  - Ethical aspects
  - Legal aspects
  - Fairness issues
- Methodology and formalization of interpretability
  - Formal measures of interpretability
  - Interpretability/complexity trade-offs
  - Methodological guidelines to evaluate interpretability
- User-centric interpretability
  - Semantic interpretability: how to add semantics to explanations
  - Human-in-the-loop approaches to construct and/or evaluate interpretable models
  - Combining ML models with information visualization and human-machine interfaces
While contributions on interpretability and explanations for classical supervised learning models are always welcome, we particularly encourage ideas on techniques and on definitions/formalizations of these concepts for unsupervised learning.