AIMLAI at ECML/PKDD 2024

We are pleased to announce that the international workshop and tutorial on Advances in Interpretable Machine Learning and Artificial Intelligence (AIMLAI) will take place at ECML/PKDD 2024, held from September 9 to September 13, 2024.

About AIMLAI (Workshop + Tutorial)

Recent technological advances rely on accurate decision support systems that can be perceived as black boxes due to their overwhelming complexity. This lack of transparency can lead to technical, ethical, legal, and trust issues. For example, if the control module of a self-driving car fails to detect a pedestrian, it becomes crucial to know why the system erred. In other cases, the decision system may reflect unacceptable biases that generate distrust. The General Data Protection Regulation (GDPR), approved by the European Parliament in 2016 and in force since 2018, suggests that individuals should be able to obtain explanations of automated decisions made from their data, and to challenge those decisions. All these reasons have propelled research in interpretable and explainable AI/ML.

The AIMLAI workshop aims to gather researchers, experts, and professionals, from inside and outside the domain of AI, interested in interpretable and explainable AI. The workshop encourages interdisciplinary collaboration, with particular emphasis on knowledge management, information visualization (infovis), human-computer interaction, and psychology. It also welcomes applied research on use cases where interpretability matters.

AIMLAI aims to become a discussion venue for novel interpretable algorithms and for explainability modules that mediate the communication between complex ML/AI systems and their users.

In addition, this edition of AIMLAI features a focused tutorial on explainable models for sequential data, including ML on time series, event sequences, DNA sequences, and large language models (LLMs).

Research Topics

Besides the central topic of interpretable algorithms, AIMLAI also emphasizes methodological aspects of measuring and evaluating interpretability in ML models and AI algorithms from a user perspective. A non-exhaustive list of topics follows:

  • Interpretability and Explanations in ML
    • Supervised and Unsupervised ML
    • Explaining recommendation models
    • Multimodal explanations
    • Explainability for large language models (LLMs)
    • Mechanistic interpretability
  • Transparency in AI and ML
    • Ethical aspects
    • Legal aspects
    • Fairness issues
  • Methodology and formalization of interpretability
    • Formal measures of interpretability
    • Interpretability/complexity trade-offs
    • Methodological guidelines to evaluate interpretability
  • User-centric interpretability
    • Semantic interpretability: how to add semantics to explanations
    • Human-in-the-loop to construct and/or evaluate interpretable models
    • Combining ML models with infovis and human-machine interfaces

While interpretability and explanations for classical supervised learning models are always welcome, we particularly encourage submissions on techniques and on definitions/formalizations of these concepts for unsupervised learning.

Accepted papers will be included in joint post-workshop proceedings (Machine Learning and Principles and Practice of Knowledge Discovery in Databases), published by Springer in 1-2 volumes organized by scope and possibly indexed by WoS. Authors will have the option to opt in or opt out.
