Machine Learning Privacy

Our research focuses on understanding and mitigating privacy risks associated with machine learning. This includes both data privacy (protecting sensitive data used to train a model during the collection and learning process) and inference privacy (limiting what can be inferred about sensitive training data from an exposed model).
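
As a rough illustration of the inference-privacy side, the sketch below shows a simple loss-threshold membership inference test: examples on which the model's loss falls below a calibrated threshold are guessed to be training members. This is an illustrative sketch only, not the method from any particular paper listed here; `model_predict` and the threshold value are hypothetical placeholders.

```python
# Illustrative sketch of a loss-threshold membership inference test.
# Assumes `model_predict(X)` returns predicted class probabilities
# of shape (n_examples, n_classes); both it and `threshold` are
# hypothetical placeholders, not artifacts from the papers below.
import numpy as np

def per_example_loss(model_predict, X, y):
    """Cross-entropy loss of each example under the model's predictions."""
    probs = model_predict(X)
    p_true = probs[np.arange(len(y)), y]        # probability of the true label
    return -np.log(np.clip(p_true, 1e-12, 1.0))

def membership_inference(model_predict, X, y, threshold):
    """Guess 'member' where loss is below the threshold (members tend to have lower loss)."""
    return per_example_loss(model_predict, X, y) < threshold
```

In the simplest version, the threshold is set from the model's average training loss; the publications below examine when such inference attacks succeed and how well defenses such as differentially private training limit them.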

Privacy Reading Group

Inference Privacy

These blog posts (in forward chronological order) summarize our recent work on evaluating inference leakage from models:

Publications

SRG papers on privacy-preserving machine learning (in reverse chronological order):

Efficient Privacy-Preserving Stochastic Nonconvex Optimization. Lingxiao Wang, Bargav Jayaraman, David Evans, Quanquan Gu. In 39th Conference on Uncertainty in Artificial Intelligence (UAI). Pittsburgh, PA. July 2023. [Arxiv]

Manipulating Transfer Learning for Property Inference. Yulong Tian, Fnu Suya, Anshuman Suri, Fengyuan Xu, David Evans. In IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (CVPR), Vancouver, Canada. June 2023. [Arxiv] [Code]

SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning. Ahmed Salem, Giovanni Cherubin, David Evans, Boris Köpf, Andrew Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella-Béguelin. In 44th IEEE Symposium on Security and Privacy (Oakland). May 2023. [Arxiv] [Video]

Dissecting Distribution Inference. Anshuman Suri, Yifu Lu, Yanjin Chen, David Evans. In IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). Raleigh, North Carolina, 8–10 February 2023. [Arxiv] [Blog] [Code]

Are Attribute Inference Attacks Just Imputation? Bargav Jayaraman and David Evans. In 29th ACM Conference on Computer and Communications Security (CCS). November 2022. [Arxiv] [Code]

Formalizing and Estimating Distribution Inference Risks. Anshuman Suri and David Evans. In Privacy Enhancing Technologies Symposium (PETS). July 2022. (Also published in Proceedings on Privacy Enhancing Technologies, Issue 4, 2022.) [Arxiv] [Code]

Revisiting Membership Inference Under Realistic Assumptions. Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, and David Evans. In Proceedings on Privacy Enhancing Technologies (PETS). July 2021. [Arxiv] [PDF] [Code]

Evaluating Differentially Private Machine Learning in Practice. Bargav Jayaraman and David Evans. In 28th USENIX Security Symposium. Santa Clara, CA. August 2019. [Arxiv] [PDF] [Code]

Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization. Bargav Jayaraman, Lingxiao Wang, Quanquan Gu, and David Evans. In 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. December 2018. [Paper] [Code]