Tutorial on Security and Privacy in Machine Learning

Copy of the slides (draft) 


Abstract:

There is growing recognition that machine learning exposes new security and privacy issues in software systems. In this tutorial, we first articulate a comprehensive threat model for machine learning, then present an attack against model prediction integrity, and finally discuss a framework for learning privately.

Machine learning models have been shown to be vulnerable to adversarial examples: subtly modified malicious inputs crafted to compromise the integrity of their outputs. Furthermore, adversarial examples that affect one model often affect another model, even if the two models have different architectures, so long as both were trained to perform the same task. An attacker may therefore conduct an attack with very little information about the victim: they train their own substitute model, craft adversarial examples against it, and then transfer them to the victim model. The attacker need not even collect a training set to mount the attack. Indeed, we demonstrate how adversaries may use the victim model as an oracle to label a synthetic training set for the substitute. We conclude this first part of the tutorial by formally showing that there are (possibly unavoidable) tensions between model complexity, accuracy, and resilience that must be calibrated for the environments in which models will be deployed.
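
To make the black-box attack concrete, here is a minimal, self-contained sketch in Python (NumPy only) of the steps described above: querying the victim as an oracle to label synthetic data, training a simple substitute, and crafting an adversarial example on the substitute with the fast gradient sign method. The victim model, the synthetic data, and the perturbation budget are illustrative assumptions rather than the setup used in the tutorial.

    import numpy as np

    rng = np.random.default_rng(0)

    def victim_oracle(x):
        # Black-box victim: the attacker only observes its output labels.
        # The hidden weights are an assumption made for this toy example.
        w_secret = np.array([2.0, -1.0])
        return (x @ w_secret > 0).astype(float)

    # Step 1: label a synthetic dataset by querying the victim as an oracle.
    x_syn = rng.normal(size=(256, 2))
    y_syn = victim_oracle(x_syn)

    # Step 2: train a logistic-regression substitute on the oracle's labels.
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(x_syn @ w + b)))     # substitute's predicted probabilities
        w -= 0.5 * x_syn.T @ (p - y_syn) / len(x_syn)  # cross-entropy gradient step
        b -= 0.5 * np.mean(p - y_syn)

    # Step 3: craft an adversarial example on the substitute with the fast
    # gradient sign method and check whether it transfers to the victim.
    x = np.array([1.0, 1.0])                       # a point the victim labels 1
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - 1.0) * w                         # gradient of the loss w.r.t. the input (true label 1)
    x_adv = x + 0.6 * np.sign(grad_x)              # 0.6 is an arbitrary perturbation budget
    print(victim_oracle(x), victim_oracle(x_adv))  # the attack transfers if the two labels differ

In practice the victim is a deployed classifier queried through an API and the substitute is a neural network, but the logic of the attack is the same.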

In addition, some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data. The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as “teachers” for a “student” model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student’s privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student’s training) and formally, in terms of differential privacy.
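
As a rough sketch of the aggregation mechanism described above, the Python (NumPy) snippet below shows teachers voting on a label and the student receiving only the noisy plurality. The number of teachers, the toy predictions, and the Laplace noise scale are assumptions made for illustration, not the parameters analyzed in the tutorial.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_aggregate(teacher_predictions, num_classes, noise_scale):
        # Count the teachers' votes for each class, then perturb the counts with
        # Laplace noise so that no single teacher (and hence no single partition
        # of the sensitive data) can dictate the label released to the student.
        votes = np.bincount(teacher_predictions, minlength=num_classes).astype(float)
        votes += rng.laplace(scale=noise_scale, size=num_classes)
        return int(np.argmax(votes))

    # Toy example: 10 teachers, each trained on a disjoint partition of the
    # sensitive data, classify one public unlabeled input into 3 classes.
    teacher_predictions = np.array([2, 2, 2, 2, 2, 2, 1, 1, 0, 2])
    label_for_student = noisy_aggregate(teacher_predictions, num_classes=3, noise_scale=1.0)
    print(label_for_student)  # the student trains only on such (public input, noisy label) pairs

Because any single sensitive record influences at most one teacher's vote, each noisy query satisfies a differential privacy guarantee, and the tutorial covers how the cost of many such queries composes into the student's overall guarantee.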

Topics include:

  • An introduction to machine learning
  • A taxonomy of threat models for security and privacy in machine learning
  • Attacks using adversarial examples against vision systems, malware detectors, and reinforcement learning agents
  • Black-box attacks against machine learning
  • Adversarial example transferability
  • Defending machine learning with adversarial training and defensive distillation
  • Open problems in defenses such as gradient masking
  • No-free-lunch theorem for adversarial machine learning
  • Short tutorial on cleverhans (an open-source library for adversarial machine learning)
  • Differential privacy
  • Privacy-preserving machine learning with the PATE framework

Learning objectives:

  • To explain the fundamentals of security and privacy in machine learning
  • To bring the audience up to date with state-of-the-art attack techniques
  • To make the audience aware of the open problems in defense strategies and, as a consequence, the risks associated with deploying machine learning in security- or privacy-sensitive settings
  • To prepare the audience to make original contributions in this area

Target audience:

The target audience is members of the security and privacy community who are interested in (a) applying machine learning to security problems or (b) making machine learning more secure and private.

Bio:

Nicolas Papernot is a PhD student in Computer Science and Engineering working with Dr. Patrick McDaniel at the Pennsylvania State University. His research interests lie at the intersection of computer security, privacy and machine learning. He is supported by a Google PhD Fellowship in Security. He received a best paper award at ICLR 2017. Nicolas is the co-author of cleverhans, an open-source library for benchmarking the vulnerability of machine learning models. In 2016, he received his M.S. in Computer Science and Engineering from the Pennsylvania State University and his M.S. in Engineering Sciences from the Ecole Centrale de Lyon.
