# Resources

{"id":82,"date":"2019-10-10T16:40:25","date_gmt":"2019-10-10T14:40:25","guid":{"rendered":"https:\/\/project.inria.fr\/hyaiai\/?page_id=82"},"modified":"2021-04-05T18:03:34","modified_gmt":"2021-04-05T16:03:34","slug":"related-links","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/hyaiai\/related-links\/","title":{"rendered":"Resources"},"content":{"rendered":"<h3>(Free) Tools for Interpretable AI<\/h3>\n<ul>\n<li><a href=\"https:\/\/eli5.readthedocs.io\/en\/latest\/\">ELI5<\/a>: Python package which helps to debug machine learning classifiers and explain their predictions<\/li>\n<li><a href=\"https:\/\/github.com\/interpretml\/interpret\">InterpretML<\/a>: open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof<\/li>\n<li><a class=\"navbar-brand align-text-middle\" href=\"https:\/\/fat-forensics.org\/#\">FAT Forensics <\/a>:\u00a0Python toolkit for evaluating Fairness, Accountability and Transparency of Artificial Intelligence systems<\/li>\n<li><a href=\"http:\/\/IBM Research Trusted AI\">AIX 360<\/a>: <span class=\"navbar-brand d-flex flex-fill\" aria-label=\"breadcrumb\">IBM Research Trusted AI . Open source toolkit can help you comprehend how machine learning models predict labels by various means throughout the AI application lifecycle<br \/>\nhttps:\/\/arxiv.org\/abs\/1909.03012<br \/>\n<\/span><\/li>\n<li>2 pages with links to different XAI projects:<br \/>\nhttps:\/\/awesomeopensource.com\/projects\/explainable-ai<br \/>\nhttps:\/\/github.com\/jphall663\/awesome-machine-learning-interpretability#python<\/li>\n<\/ul>\n<h3>Key venues in XAI and interesting talks<\/h3>\n<ul>\n<li><a href=\"https:\/\/sites.google.com\/view\/xai2020\/home\">XAI 2020<\/a>: IJCAI workshop on <em>Explainable AI <\/em>(also done at <a href=\"https:\/\/sites.google.com\/view\/xai2019\/home\">IJCAI 2019<\/a>)<em><br \/>\n<\/em><\/li>\n<li><a href=\"https:\/\/kdd.isti.cnr.it\/xkdd2020\/\">XKDD 2020<\/a>: ECML-PKDD workshops on\u00a0<em>eXplainable Knowledge Discovery in Data Mining<\/em><\/li>\n<li><a href=\"https:\/\/project.inria.fr\/aimlai\/\">AIMLAI 2020<\/a> : CIKM workshop on <em>Advances in Interpretable Machine Learning and Artificial Intelligence<strong><br \/>\n<\/strong><\/em><\/li>\n<li><a href=\"https:\/\/sites.google.com\/view\/whi2020\">WHI 2020<\/a>: ICML workshop on <em>Human Interpretability in Machine Learning<\/em> (5th edition, see also <a href=\"https:\/\/sites.google.com\/view\/hill2019\">HILL 2019<\/a>, <a href=\"https:\/\/sites.google.com\/view\/whi2018\">WHI 2018,<\/a> <a href=\"https:\/\/sites.google.com\/view\/whi2017\">WHI 2017<\/a>, WHI 2016).<\/li>\n<li><a href=\"http:\/\/interpretable-ml.org\/icml2020workshop\/\">XXAI 2020<\/a>: ICML workshop on <em>Extending Explainable AI Beyond Deep Models and Classifiers<\/em><\/li>\n<li><a href=\"https:\/\/interpretablevision.github.io\/\">CVPR 2020 Tutorial<\/a> on\u00a0<i data-stringify-type=\"italic\">Interpretable Machine Learning for Computer Vision <\/i>(also done at <a href=\"https:\/\/interpretablevision.github.io\/index_iccv2019.html\">ICCV&#8217;19<\/a>, <a href=\"https:\/\/interpretablevision.github.io\/index_cvpr2018.html\">CVPR&#8217;18<\/a>)<b data-stringify-type=\"bold\"><i data-stringify-type=\"italic\"><br \/>\n<\/i><\/b><\/li>\n<li><a href=\"https:\/\/visxai.io\/\">VISxAI 2020<\/a>: <strong>3rd<\/strong> VIS workshop on <em>Visualization for AI Explainability<\/em><\/li>\n<li><a href=\"https:\/\/kdd.isti.cnr.it\/xkdd2019\/\">XKDD-AI<\/a><a 
href=\"https:\/\/kdd.isti.cnr.it\/xkdd2019\/\">MLAI 2019<\/a>: ECML-PKDD joint workshops on <em>Interpretable\/Explainable AI<strong><br \/>\n<\/strong><\/em><\/li>\n<\/ul>\n<h3>Key publications in XAI<\/h3>\n<ul>\n<li>[LIME] Marco T\u00falio Ribeiro, Sameer Singh, Carlos Guestrin: &#8220;Why Should I Trust You?&#8221;: Explaining the Predictions of Any Classifier. KDD 2016: pp 1135-1144<\/li>\n<li>[SHAP] Scott M. Lundberg, Su-In Lee: A Unified Approach to Interpreting Model Predictions. NIPS 2017: pp 4768-4777<\/li>\n<li>[GRADCAM] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, \u201cGrad-cam: Visual explanations from deep networks via gradient-based localization,\u201d in IEEE International Conference on Computer Vision, ICCV, 2017, pp. 618\u2013626.<\/li>\n<li>[Integrated Gradient] <cite class=\"data\"><span class=\"this-person\">Mukund Sundararajan<\/span>, <a href=\"https:\/\/dblp.org\/pid\/60\/3530.html\"><span title=\"Ankur Taly\">Ankur Taly<\/span><\/a>, <a href=\"https:\/\/dblp.org\/pid\/22\/3967.html\"><span title=\"Qiqi Yan\">Qiqi Yan<\/span><\/a>:<br \/>\n<span class=\"title\">Axiomatic Attribution for Deep Networks.<\/span> <a href=\"https:\/\/dblp.org\/db\/conf\/icml\/icml2017.html#SundararajanTY17\">ICML 2017<\/a>: 3319-3328<\/cite><\/li>\n<li>[ANCHORS] Marco T\u00falio Ribeiro, Sameer Singh, Carlos Guestrin: Anchors: High-Precision Model-Agnostic Explanations. AAAI 2018: pp 1527-1535<\/li>\n<li>[SURVEY] Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, Fosca Giannotti: A Survey Of Methods For Explaining Black Box Models.\u00a0<a class=\"article__tocHeading\" href=\"https:\/\/dl.acm.org\/journal\/csur\">ACM Computing Surveys<\/a><a class=\"article__tocHeading\" href=\"https:\/\/dl.acm.org\/toc\/csur\/2019\/51\/5\">\u00a0Vol. 51, No. 
### Big projects on explainable/interpretable AI

- [TAILOR ICT-48 project](https://www.ida.liu.se/~frehe08/tailor2020/TAILOR_overview.pdf) (Foundations of Trustworthy AI integrating Learning, Optimisation and Reasoning), with a focus on [WP3](https://www.ida.liu.se/~frehe08/tailor2020/TAILOR_2020_wp3_intro.pdf) on *Trustworthy AI*
- [DARPA XAI](https://www.darpa.mil/program/explainable-artificial-intelligence) (until 2018)
- [XAI](https://xai-project.eu/): ERC grant of Fosca Giannotti, *Science and Technology for the Explanation of AI Decision Making* (2019-10-01 to 2024-09-30)

### Miscellaneous

- A [page](https://github.com/pbiecek/xai_resources/) with (good) additional resources about XAI
<\/p>\n","protected":false},"author":1669,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-82","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/hyaiai\/wp-json\/wp\/v2\/pages\/82","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/hyaiai\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/hyaiai\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/hyaiai\/wp-json\/wp\/v2\/users\/1669"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/hyaiai\/wp-json\/wp\/v2\/comments?post=82"}],"version-history":[{"count":28,"href":"https:\/\/project.inria.fr\/hyaiai\/wp-json\/wp\/v2\/pages\/82\/revisions"}],"predecessor-version":[{"id":294,"href":"https:\/\/project.inria.fr\/hyaiai\/wp-json\/wp\/v2\/pages\/82\/revisions\/294"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/hyaiai\/wp-json\/wp\/v2\/media?parent=82"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}