Machine Learning Privacy

Our research focuses on understanding and mitigating privacy risks associated with machine learning. This includes both data privacy (protecting sensitive data used to train a model during the collection and learning process) and inference privacy (limiting what can be inferred about sensitive training data from an exposed model).

Inference Privacy

Recent blog posts summarize our work on evaluating inference leakage from models.
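
To make concrete what such leakage can look like, here is a minimal sketch of a loss-threshold membership inference attack, in the spirit of the attacks evaluated in the publications below. It assumes a scikit-learn-style classifier with a `predict_proba` method; the helper names and the threshold are illustrative assumptions, not code from our papers.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Assumes a scikit-learn-style classifier exposing predict_proba;
# every name here is illustrative, not from any specific paper below.
import numpy as np

def per_example_loss(model, X, y):
    """Cross-entropy loss of the model on each labeled example."""
    probs = model.predict_proba(X)                       # shape (n, n_classes)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)  # +1e-12 avoids log(0)

def infer_membership(model, X, y, threshold):
    """Guess 'member' when the loss falls below the threshold: models
    typically fit training examples more tightly than unseen ones."""
    return per_example_loss(model, X, y) < threshold
```

A common calibration sets the threshold to the model's average training loss; a recurring question in the work below is how much such attacks actually reveal beyond what could be inferred from the data distribution without access to the model.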

Publications

Are Attribute Inference Attacks Just Imputation? Bargav Jayaraman and David Evans. In 29th ACM Conference on Computer and Communications Security (CCS). November 2022. [arXiv] [Code]

Formalizing and Estimating Distribution Inference Risks. Anshuman Suri and David Evans. In Privacy Enhancing Technologies Symposium (PETS). July 2022. (Also published in Proceedings on Privacy Enhancing Technologies, Issue 4, 2022.) [arXiv] [Code]

Revisiting Membership Inference Under Realistic Assumptions. Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, and David Evans. In Proceedings on Privacy Enhancing Technologies (PETS). July 2021. [arXiv] [PDF] [Code]

Evaluating Differentially Private Machine Learning in Practice. Bargav Jayaraman and David Evans. In 28th USENIX Security Symposium. Santa Clara. August 2019. [arXiv] [PDF] [Code]

Privacy-Preserving Machine Learning

Distributed learning (sometimes marketed as federated learning) allows a group of independent data owners to collaboratively learn a model over their data sets without exposing their private data.
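
As a minimal sketch of how one round of such a scheme can work, the FedAvg-style example below averages locally trained logistic-regression parameters. The model, function names, and hyperparameters are illustrative assumptions, not a specific protocol from our projects.

```python
# Minimal sketch of one round of federated averaging (FedAvg-style).
# Each owner trains locally and shares only model parameters, never raw
# data. The logistic-regression model and all names are illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    """One owner's local training: plain SGD on its private (X, y)."""
    w = weights.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 / (1.0 + np.exp(-xi @ w))  # sigmoid prediction
            w -= lr * (pred - yi) * xi            # logistic-loss gradient step
    return w

def federated_round(weights, owners):
    """Server step: average locally trained parameters from all owners."""
    updates = [local_update(weights, X, y) for X, y in owners]
    return np.mean(updates, axis=0)
```

Only parameters cross the trust boundary here, but shared updates can still leak information about the underlying data, which is exactly the kind of risk the inference work above measures.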

Projects

Integrating Multi-Party Computation with Differential Privacy (Code, NeurIPS 2018 Paper); a simplified sketch of the idea appears after this list.
Bargav Jayaraman, Lingxiao Wang, Quanquan Gu

Privacy-preserving Medical Decision Systems
Josephine Lamp and Lu Feng

Privacy-Preserving Nonconvex Optimization [Preprint]
Lingxiao Wang, Bargav Jayaraman, Quanquan Gu
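
As a rough illustration of the first project's theme, the sketch below combines the two ingredients: each party clips its gradient and adds a 1/n share of Gaussian noise before aggregation, so that only the noisy sum need be revealed. This is a simplified stand-in under stated assumptions, with the MPC layer abstracted to a plain sum; it is not the NeurIPS 2018 protocol itself.

```python
# Rough sketch of combining secure aggregation with differential privacy:
# each party clips its gradient and adds a 1/n share of the Gaussian
# noise, so the aggregate carries noise with std sigma * clip_norm.
# All parameters are illustrative, not those of the NeurIPS 2018 paper.
import numpy as np

def noisy_share(gradient, clip_norm, sigma, n_parties, rng):
    """Clip one party's gradient and add its share of the total noise."""
    g = gradient * min(1.0, clip_norm / (np.linalg.norm(gradient) + 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm / np.sqrt(n_parties),
                       size=g.shape)
    return g + noise

def secure_aggregate(gradients, clip_norm=1.0, sigma=1.0, seed=0):
    """Stand-in for the MPC protocol, which would compute this sum inside
    a secure computation so no individual share is ever exposed."""
    rng = np.random.default_rng(seed)
    n = len(gradients)
    shares = [noisy_share(g, clip_norm, sigma, n, rng) for g in gradients]
    return np.sum(shares, axis=0)
```

Splitting the noise across parties keeps the total noise at roughly the level a trusted curator would add, rather than paying the full noise cost once per party.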

Privacy Study Group

Meetings on Tuesdays, 11am (Summer 2020)

Leader: Bargav Jayaraman

26 May 2020: Cynthia Dwork, Privacy-Preserving Data Analysis (Alan Turing Institute, Turing Lecture, Dec 2016)

2 June 2020: Guy Rothblum, Composition: The Key to Differential Privacy is Success (Institute for Advanced Study, Nov 2016)

9 June 2020:
Yuxin Wang, Detecting Violations of Differential Privacy (ACM CCS 2018)
Benjamin Bichsel, DP-Finder: Finding Differential Privacy Violations by Sampling and Optimization (ACM CCS 2018)

15 June 2020 (joint with the Adversarial Machine Learning Study Group):
Mathias Lécuyer, Certified Robustness to Adversarial Examples with Differential Privacy (IEEE Security and Privacy 2019)
Liwei Song, Membership Inference Attacks against Adversarially Robust Deep Learning Models (IEEE Security and Privacy 2019)

23 June 2020:
Reza Shokri, Membership Inference Attacks against Machine Learning Models (IEEE Security and Privacy 2017)
Nicholas Carlini, The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks (USENIX Security 2019)

30 June 2020:
Karan Ganju, Property Inference Attacks on Fully Connected Neural Networks (ACM CCS 2018)
Binghui Wang, Stealing Hyperparameters in Machine Learning (IEEE Security and Privacy 2018)

7 July 2020:
Joe Near, Differential Privacy at Scale: Uber and Berkeley Collaboration (USENIX Enigma 2018; also see Frank McSherry’s github issue and blog post)
Brendan McMahan, Guarding User Privacy with Federated Learning and Differential Privacy (DIMACS, 2017)

14 July 2020: Ilya Mironov, Rényi Differential Privacy (DIMACS, 2017)