Welcome

We are happy to announce the third edition of the workshop on Advances in Interpretable Machine Learning and Artificial Intelligence (AIMLAI), which will take place this year at the CIKM conference. CIKM will be held fully online on October 19–23, 2020.

About AIMLAI

Recent technological advances rely on accurate decision support systems that can be perceived as black boxes due to their overwhelming complexity. This lack of transparency can lead to technical, ethical, legal, and trust issues. For example, if the control module of a self-driving car fails to detect a pedestrian, it becomes crucial to know why the system erred. In other cases, the decision system may reflect unacceptable biases that generate distrust. The General Data Protection Regulation (GDPR), approved by the European Parliament in 2016 and in effect since May 2018, suggests that individuals should be able to obtain explanations of the decisions made from their data by automated processing, and to challenge those decisions. All these concerns have given rise to the domain of interpretable AI.

AIMLAI aims at gathering researchers, experts, and professionals from inside and outside the domain of AI who are interested in interpretable ML and interpretable AI. The workshop encourages interdisciplinary collaborations, with particular emphasis on knowledge management, infovis, human-computer interaction, and psychology. It also welcomes applied research for use cases where interpretability matters.

AIMLAI aims to become a venue for discussing novel interpretable algorithms and explainability modules that mediate the communication between complex ML/AI systems and their users.

Research Topics

Besides the central topic of interpretable algorithms, AIMLAI also emphasizes the methodological aspects of measuring and evaluating interpretability in ML models and AI algorithms from a user perspective. A non-exhaustive list of topics follows:

  • Interpretable ML
    • Supervised ML (classifiers, regressors, …)
    • Unsupervised ML (clustering, dimensionality reduction, visualisation, …)
    • Explaining recommendation systems
  • Transparency in AI and ML
    • Ethical aspects
    • Legal aspects
    • Fairness issues
  • Methodology and formalization of interpretability
    • Formal measures of interpretability
    • Interpretability/complexity trade-offs
    • How to evaluate interpretability
  • User-centric interpretability
    • Interpretability modules: generating explanations for ML and AI algorithms
    • Semantic interpretability: how to add semantics to explanations
    • Human-in-the-loop to construct and/or evaluate interpretable models
    • Integration of ML algorithms, infovis and man-machine interfaces

Submissions with an interdisciplinary orientation are particularly welcome, i.e., works at the crossroads between interpretable ML/AI and data & knowledge management, infovis, man-machine interfaces, and psychology. Applied research where interpretability is vital is also of interest, including but not limited to medical applications, decision systems in law and policy-making, and Industry 4.0.
