Resources

(Free) Tools for Interpretable AI

  • ELI5: a Python package that helps debug machine learning classifiers and explain their predictions
  • InterpretML: an open-source package that brings state-of-the-art machine learning interpretability techniques under one roof (a short usage sketch follows this list)
  • FAT Forensics: a Python toolkit for evaluating the Fairness, Accountability and Transparency of Artificial Intelligence systems
  • AIX 360: open-source toolkit from IBM Research's Trusted AI initiative that helps you understand, through a variety of methods, how machine learning models predict labels across the AI application lifecycle
    https://arxiv.org/abs/1909.03012
  • Two pages with links to further XAI projects:
    https://awesomeopensource.com/projects/explainable-ai
    https://github.com/jphall663/awesome-machine-learning-interpretability#python
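
As a quick illustration of how such toolkits are used, here is a minimal sketch with InterpretML's glassbox API, training an Explainable Boosting Machine on a scikit-learn toy dataset. The calls follow the package's public documentation, but take it as an illustrative starting point rather than a reference implementation.

    # Minimal sketch: an interpretable glassbox model with InterpretML.
    # Assumes `pip install interpret scikit-learn`.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from interpret.glassbox import ExplainableBoostingClassifier
    from interpret import show

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An Explainable Boosting Machine is a boosted GAM, so each feature's
    # contribution to a prediction can be read off directly.
    ebm = ExplainableBoostingClassifier()
    ebm.fit(X_train, y_train)

    # Renders interactive explanations (best viewed in a notebook).
    show(ebm.explain_global())                       # per-feature shape functions
    show(ebm.explain_local(X_test[:5], y_test[:5]))  # per-prediction breakdowns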

Key venues in XAI and interesting talks

  • XAI 2020: IJCAI workshop on Explainable AI (also done at IJCAI 2019)
  • XKDD 2020: ECML-PKDD workshop on eXplainable Knowledge Discovery in Data Mining
  • AIMLAI 2020: CIKM workshop on Advances in Interpretable Machine Learning and Artificial Intelligence
  • WHI 2020: ICML workshop on Human Interpretability in Machine Learning (5th edition, see also HILL 2019, WHI 2018, WHI 2017, WHI 2016).
  • XXAI 2020: ICML workshop on Extending Explainable AI Beyond Deep Models and Classifiers
  • CVPR 2020 Tutorial on Interpretable Machine Learning for Computer Vision (also done at ICCV’19, CVPR’18)
  • VISxAI 2020: 3rd VIS workshop on Visualization for AI Explainability
  • XKDD-AIMLAI 2019: ECML-PKDD joint workshops on Interpretable/Explainable AI

Key publications in XAI

  • [LIME] Marco Túlio Ribeiro, Sameer Singh, Carlos Guestrin: “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. KDD 2016: 1135-1144
  • [SHAP] Scott M. Lundberg, Su-In Lee: A Unified Approach to Interpreting Model Predictions. NIPS 2017: 4768-4777
  • [GRADCAM] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra: Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. ICCV 2017: 618-626
  • [Integrated Gradient] Mukund Sundararajan, Ankur Taly, Qiqi Yan: Axiomatic Attribution for Deep Networks. ICML 2017: 3319-3328 (a short sketch follows this list)
  • [ANCHORS] Marco Túlio Ribeiro, Sameer Singh, Carlos Guestrin: Anchors: High-Precision Model-Agnostic Explanations. AAAI 2018: 1527-1535
  • [SURVEY] Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, Fosca Giannotti: A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys 51(5) (2018)
  • [Prototype Explanations] Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, Jonathan Su: This Looks Like That: Deep Learning for Interpretable Image Recognition. NeurIPS 2019: 8928-8939
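
To make the attribution idea behind [Integrated Gradient] concrete: the method defines IG_i(x) = (x_i - x'_i) * ∫_0^1 ∂F(x' + a(x - x'))/∂x_i da for a baseline x', and in practice the integral is approximated by a Riemann sum over a few dozen points along the straight-line path. Below is a minimal numpy sketch under that definition; grad_f is a placeholder for the gradient of the model output with respect to its input (in practice supplied by an autodiff framework), and the toy F is chosen only so the completeness axiom can be checked by hand.

    # Minimal sketch of Integrated Gradients (Sundararajan et al., ICML 2017).
    import numpy as np

    def integrated_gradients(grad_f, x, baseline=None, steps=50):
        """IG_i(x) ~ (x_i - x'_i) * mean_k dF/dx_i(x' + (k/steps)(x - x'))."""
        x = np.asarray(x, dtype=float)
        baseline = np.zeros_like(x) if baseline is None else np.asarray(baseline, dtype=float)
        alphas = (np.arange(steps) + 1) / steps  # interpolation points on the path
        grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
        return (x - baseline) * grads.mean(axis=0)

    # Toy check with F(x) = x0**2 + 3*x1, whose gradient is (2*x0, 3):
    grad_f = lambda z: np.array([2.0 * z[0], 3.0])
    attr = integrated_gradients(grad_f, x=np.array([1.0, 2.0]))
    # Completeness axiom: attributions sum (up to discretisation error)
    # to F(x) - F(baseline) = 7.
    print(attr, attr.sum())

Note the zero baseline used as a default here: the paper stresses that the choice of baseline is part of the explanation and should be made deliberately.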


Big projects on explainable/interpretable AI

  • TAILOR, an ICT-48 project (Foundations of Trustworthy AI integrating Learning, Optimisation and Reasoning); see in particular WP3 on Trustworthy AI
  • DARPA XAI (until 2018)
  • ERC grant to Fosca Giannotti, “XAI: Science and technology for the explanation of AI decision making” (2019-10-01 to 2024-09-30)


Miscellaneous

  • A page with (good) additional resources about XAI
