Machine Learning Privacy

Our research focuses on understanding and mitigating privacy risks associated with machine learning. This includes both data privacy (protecting sensitive data used to train a model during the collection and learning process) and inference privacy (limiting what can be inferred about sensitive training data from an exposed model).
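
As a concrete illustration of what inference privacy is about, the sketch below implements a simple confidence-thresholding membership inference test against a trained classifier. It is a minimal, hypothetical example in Python assuming a scikit-learn-style model with a predict_proba method; the threshold rule and the advantage metric are generic illustrations, not the specific attacks evaluated in our work.

    import numpy as np

    def is_member_guess(model, x, threshold=0.9):
        # Guess "training member" when the model is very confident on x;
        # overconfidence on training points is a common source of leakage.
        confidence = model.predict_proba(x.reshape(1, -1)).max()
        return confidence > threshold

    def membership_advantage(model, members, non_members, threshold=0.9):
        # Advantage = true positive rate on known members minus false positive
        # rate on known non-members; an advantage near 0 means little leakage.
        tpr = np.mean([is_member_guess(model, x, threshold) for x in members])
        fpr = np.mean([is_member_guess(model, x, threshold) for x in non_members])
        return tpr - fpr

A model whose confidence on training points is indistinguishable from its confidence on fresh points gives the adversary essentially no advantage, which is the intuition behind membership inference evaluations of this kind.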

Evaluating Inference Privacy

These two blog posts summarize our recent work on evaluating inference leakage from models:

Code

Privacy-Preserving Machine Learning

Distributed learning (sometimes marketed as federated learning) allows a group of independent data owners to collaboratively learn a model over their data sets without exposing their private data.
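
To make the setting concrete, below is a minimal FedAvg-style sketch in Python in which each data owner runs local logistic-regression updates on its own data and only model weights are shared with the aggregator. The local update rule, hyperparameters, and synthetic data are illustrative assumptions, not taken from any of the projects listed below.

    import numpy as np

    def local_update(weights, features, labels, lr=0.1, epochs=5):
        # One owner's local logistic-regression training; raw data never leaves.
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-features @ w))
            w -= lr * features.T @ (preds - labels) / len(labels)
        return w

    def federated_average(weights, owners, rounds=10):
        # Each round, every owner trains locally and the server averages weights.
        for _ in range(rounds):
            weights = np.mean([local_update(weights, X, y) for X, y in owners], axis=0)
        return weights

    # Example: three owners with private (features, labels) data sets.
    rng = np.random.default_rng(0)
    owners = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)) for _ in range(3)]
    model = federated_average(np.zeros(4), owners)

Even though raw data never leaves the owners, the shared weight updates can still leak information about it, which is what motivates combining this setup with secure computation and differential privacy in the projects below.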

Projects

Integrating Multi-Party Computation with Differential Privacy (Code, NeurIPS 2018 Paper; see the sketch after this project list)
Bargav Jayaraman, Lingxiao Wang, Quanquan Gu

Privacy-preserving Medical Decision Systems
Josephine Lamp and Lu Feng

Privacy-Preserving Nonconvex Optimization [Preprint]
Lingxiao Wang, Bargav Jayaraman, Quanquan Gu
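
As a rough illustration of the idea behind the first project above (combining multi-party computation with differential privacy), the sketch below additively secret-shares each party's gradient so that no single share reveals anything, sums the shares, and adds Gaussian noise to the aggregate for differential privacy. This is a simplified, hypothetical illustration (real-valued shares, noise added in one place), not the protocol from the NeurIPS 2018 paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def secret_share(value, n_parties):
        # Additively secret-share a vector: individual shares look random,
        # but all shares sum back to the original value.
        shares = [rng.normal(size=value.shape) for _ in range(n_parties - 1)]
        shares.append(value - sum(shares))
        return shares

    def secure_noisy_sum(gradients, sigma=1.0):
        # Each party shares its gradient with the others; only sums of shares
        # are ever revealed, and Gaussian noise protects the final aggregate.
        n = len(gradients)
        all_shares = [secret_share(g, n) for g in gradients]
        partial_sums = [sum(all_shares[i][j] for i in range(n)) for j in range(n)]
        return sum(partial_sums) + rng.normal(scale=sigma, size=gradients[0].shape)

    grads = [rng.normal(size=5) for _ in range(3)]
    print(secure_noisy_sum(grads))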

Privacy Study Group

Meetings on Tuesdays, 11am (Summer 2020)

Leader: Bargav Jayaraman

Schedule (Date, Topic/Talk):

26 May 2020
Cynthia Dwork, Privacy-Preserving Data Analysis (Alan Turing Institute, Turing Lecture, Dec 2016)

2 June 2020
Guy Rothblum, Composition: The Key to Differential Privacy is Success (Institute for Advanced Study, Nov 2016)

9 June 2020
Yuxin Wang, Detecting Violations of Differential Privacy (ACM CCS 2018)
Benjamin Bichsel, DP-Finder: Finding Differential Privacy Violations by Sampling and Optimization (ACM CCS 2018)

15 June 2020 (joint with the Adversarial Machine Learning Study Group)
Mathias Lécuyer, Certified Robustness to Adversarial Examples with Differential Privacy (IEEE Security and Privacy 2019)
Liwei Song, Membership Inference Attacks against Adversarially Robust Deep Learning Models (IEEE Security and Privacy 2019)

23 June 2020
Reza Shokri, Membership Inference Attacks against Machine Learning Models (IEEE Security and Privacy 2017)
Nicholas Carlini, The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks (USENIX Security 2019)

30 June 2020
Karan Ganju, Property Inference Attacks on Fully Connected Neural Networks (ACM CCS 2018)
Binghui Wang, Stealing Hyperparameters in Machine Learning (IEEE Security and Privacy 2018)

7 July 2020
Joe Near, Differential Privacy at Scale: Uber and Berkeley Collaboration (USENIX Enigma 2018; also see Frank McSherry’s GitHub issue and blog post)
Brendan McMahan, Guarding User Privacy with Federated Learning and Differential Privacy (DIMACS, 2017)

14 July 2020
Ilya Mironov, Rényi Differential Privacy (DIMACS, 2017)

Join our Slack group for announcements about future meetings.