About AIMLAI

Recent technological advances rely on accurate decision support systems that operate as black boxes. That is, the system's internal logic is not available to the user, either for financial reasons or due to the complexity of the system. This lack of explanation can lead to technical, ethical, legal, and trust issues. For example, if the control module of a self-driving car fails to detect a pedestrian, it becomes crucial to know why the system erred. In other cases, the decision system may reflect unacceptable biases that generate distrust. The General Data Protection Regulation (GDPR), a law recently approved by the European Parliament, suggests in one of its clauses that individuals should be able to obtain explanations of decisions made by automated processing, and to challenge those decisions. For all these reasons, multiple research approaches aim to provide comprehensible explanations for traditionally accurate but black-box machine learning algorithms such as neural networks and random forests. AIMLAI (Advances in Interpretable Machine Learning and Artificial Intelligence) aims to gather researchers, experts, and professionals interested in interpretable ML and interpretable AI.

AIMLAI aspires to become a discussion venue for the development of novel interpretable algorithms and explainability modules that mediate communication between complex ML/AI systems and their users.

Research Topics

Besides the central topic of interpretable algorithms, this edition of AIMLAI focuses on methodological aspects of evaluating interpretability in ML models and AI algorithms. We particularly welcome submissions that address three major research questions, namely "how to measure interpretability?", "how to evaluate interpretability?", and "how to integrate humans into the ML pipeline for interpretability purposes?". We provide a non-exhaustive list of topics below:

  • Interpretable ML
    • Supervised ML (classifiers, regressors, …)
    • Unsupervised ML (clustering, dimensionality reduction, visualisation, …)
    • Explainable recommendation models
  • Transparency in AI and ML
    • Ethical aspects
    • Legal aspects
    • Fairness issues
  • Methodology and formalization of interpretability
    • Formal measures of interpretability
    • Relation between interpretability and the complexity of models
    • How to evaluate interpretability
  • User-centric interpretability
    • Interpretability modules: generating explanations for ML and AI algorithms
    • Semantic interpretability: how to add semantics to explanations
    • Human-in-the-loop to construct and/or evaluate interpretable models
    • Integration of ML algorithms, information visualisation, and human-machine interfaces