Tutorial on Understanding Deep Neural Networks and their Predictions

Copy of the slides here

Abstract

Machine learning methods such as deep neural networks have achieved spectacular progress on a number of complex problems in image recognition, natural language processing, and scientific domains. Complex nonlinear models, combined with powerful optimization techniques and large compute resources, have made it possible to automatically extract information from large databases.

The Internet provides vast collections of annotated images and texts that can be processed by machine learning algorithms. However, in certain applications (e.g. physics, biology, medicine, forensics) it can be difficult to assemble a dataset large enough for the model to be properly trained and validated. As a result, the learning algorithm may learn to use features that do not generalize to new data, even after careful cross-validation.

It is therefore important to go beyond standard data-driven validation techniques, for example, by tapping into human intuition and domain knowledge. Doing so requires techniques that are able to explain, in a reliable and interpretable manner, what these complex models have learned.

The tutorial will be composed of three parts:

  1. Deep neural networks (DNN): The basic concepts of deep neural networks and how to train them will be presented. The learned models will be compared to other machine learning models such as linear or kernel classifiers, and several successful applications of deep networks will be discussed.
  2. Techniques of analysis and interpretation: Different types of DNN interpretations (interpreting modeled classes, explaining individual predictions) will be presented. The tutorial will then focus on several techniques that produce these interpretations, and on how to make them robust and scalable for complex DNN models (a minimal code sketch of one such technique is given after this list).
  3. Application to model validation and knowledge extraction: We will present how interpretability techniques can be used to overcome the limitations of basic approaches to model validation. This will be illustrated with several practical examples. Finally, other examples will be given where DNNs combined with interpretability techniques are bringing new insights into scientific problems.
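To give a concrete flavor of part 2, below is a minimal sketch, not taken from the tutorial material, of one of the simplest prediction-explanation techniques: gradient-based sensitivity analysis (with a "gradient × input" variant). The two-layer network, its random weights, and the input vector are illustrative placeholders only.

```python
# Minimal sketch of gradient-based sensitivity analysis for explaining a
# single prediction. All quantities below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: x -> ReLU(W1 x + b1) -> w2 . h + b2
W1 = rng.normal(size=(5, 4))
b1 = np.zeros(5)
w2 = rng.normal(size=5)
b2 = 0.0

x = rng.normal(size=4)            # placeholder input to be explained

# Forward pass
z = W1 @ x + b1
h = np.maximum(z, 0.0)            # ReLU activations
score = w2 @ h + b2               # class score f(x)

# Backward pass: gradient of the score with respect to the input.
# A large |df/dx_i| means the prediction is locally sensitive to feature i.
grad_h = w2
grad_z = grad_h * (z > 0)         # ReLU derivative
grad_x = W1.T @ grad_z            # df/dx

relevance = grad_x * x            # "gradient x input" heatmap
print("sensitivity:", grad_x)
print("gradient x input:", relevance)
```

More advanced techniques covered in the tutorial refine this basic idea to produce explanations that are more robust for deep, highly nonlinear models.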

Bio

Grégoire Montavon received a Master's degree in Communication Systems from École Polytechnique Fédérale de Lausanne in 2009 and a Ph.D. degree in Machine Learning from the Technische Universität Berlin in 2013. He is currently a Research Associate in the Machine Learning Group at TU Berlin. His research interests include neural networks, machine learning, and data analysis.
