Recent technological advances rely on accurate decision support systems that are constructed as black boxes. That is, the system's internal logic is not available to the user, either for proprietary reasons or due to the complexity of the system. This lack of explanation can lead to technical, ethical, and legal issues. For example, if the control module of a self-driving car fails to detect a pedestrian, it becomes crucial to know why the system erred. In other cases, the decision system may reflect unacceptable biases that generate distrust. Recently, the European Parliament adopted the General Data Protection Regulation (GDPR), a law that for the first time stipulates the right of individuals to obtain comprehensible explanations of the logic involved when automated decision making takes place. For all these reasons, multiple research approaches aim to provide comprehensible explanations for traditionally accurate but black-box machine learning algorithms such as neural networks or random forests. AIMLAI aims at gathering researchers, experts and professionals interested in the topics of interpretable ML and interpretable AI.

The AIMLAI workshop is co-located with the conference EGC (Extraction et Gestion des Connaissances), which will take place in Metz (France) from the 21st to the 25th of January 2019. AIMLAI aims to become a discussion venue for the advent of novel interpretable algorithms and explainability modules that mediate the communication between complex ML/AI systems and users.

Research Topics

  • Interpretable classifiers and regressors
  • Generating explanations for ML and AI algorithms
  • AI ethics
  • Explainable recommendation models
