Attribute Inference attacks are really Imputation
Post by Bargav Jayaraman

Prior works have shown that attribute inference attacks pose a privacy threat to ML models. However, these works assume the adversary knows the training distribution, and we show that under this assumption the attacks do no better than a data imputation attack that has no access to the model. We explore attribute inference risks in settings where the adversary has limited or no prior knowledge of the training distribution, and show that our white-box attribute inference attack (which uses neuron activations to infer the unknown sensitive attribute) surpasses imputation in these data-constrained cases.
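The white-box attack described here predicts the sensitive attribute from the model's internal neuron activations and is compared against an imputation baseline that never queries the model. Below is a minimal, self-contained sketch of that comparison on synthetic data using scikit-learn; the architecture, features, and attack details are placeholders for illustration, not those from the paper.

```python
# Sketch: white-box attribute inference via neuron activations vs. plain imputation.
# All models, features, and numbers here are illustrative, not the ones from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, d = 2000, 8
X_known = rng.normal(size=(n, d))                                    # attributes the adversary knows
s = (X_known[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # sensitive attribute
y = (X_known[:, 1] + 0.5 * s > 0).astype(int)                        # task label

# Target model is trained on all attributes, including the sensitive one.
X_full = np.column_stack([X_known, s])
target = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_full, y)

# Imputation baseline: predict s from the known attributes alone, with no model access.
X_tr, X_te, s_tr, s_te = train_test_split(X_known, s, test_size=0.5, random_state=0)
imputer = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)
print("imputation accuracy:", imputer.score(X_te, s_te))

def hidden_activations(model, X_part, s_guess):
    # ReLU activations of the first hidden layer for queries with a guessed sensitive value.
    X_q = np.column_stack([X_part, np.full(len(X_part), s_guess)])
    return np.maximum(X_q @ model.coefs_[0] + model.intercepts_[0], 0)

def attack_features(model, X_part):
    # Concatenate activations under both candidate values of the sensitive attribute.
    return np.column_stack([hidden_activations(model, X_part, 0),
                            hidden_activations(model, X_part, 1)])

# White-box attack: a meta-classifier maps neuron activations to the sensitive value,
# trained on auxiliary records where the adversary knows the sensitive attribute.
meta = LogisticRegression(max_iter=1000).fit(attack_features(target, X_tr), s_tr)
print("white-box attack accuracy:", meta.score(attack_features(target, X_te), s_te))
```

The point of the comparison is the access assumption: the imputer sees only the known attributes, while the white-box attacker additionally sees the target model's internal activations.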
Congratulations, Dr. Jayaraman!
Congratulations to Bargav Jayaraman for successfully defending his PhD thesis!

Dr. Jayaraman and his PhD committee: Mohammad Mahmoody, Quanquan Gu (UCLA Department of Computer Science, on screen), Yanjun Qi (Committee Chair, on screen), Denis Nekipelov (Department of Economics, on screen), and David Evans.

Bargav will join the Meta AI Lab in Menlo Park, CA as a post-doctoral researcher.

Analyzing the Leaky Cauldron: Inference Attacks on Machine Learning
Machine learning models have been shown to leak sensitive information about their training data.

Microsoft Research Summit: Surprising (and Unsurprising) Inference Risks in Machine Learning
Here are the slides for my talk at the Practical and Theoretical Privacy of Machine Learning Training Pipelines Workshop at the Microsoft Research Summit (21 October 2021): Surprising (and Unsurprising) Inference Risks in Machine Learning [PDF]

The work by Bargav Jayaraman (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) that I talked about on improving membership inference attacks is described in more detail here: Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David Evans.

UVA News Article
UVA News has an article by Audra Book on our research on security and privacy of machine learning (with some very nice quotes from several students in the group, and me saying something positive about the NSA!): Computer science professor David Evans and his team conduct experiments to understand security and privacy risks associated with machine learning, 8 September 2021.

David Evans, professor of computer science in the University of Virginia School of Engineering and Applied Science, is leading research to understand how machine learning models can be compromised.

ICLR DPML 2021: Inference Risks for Machine Learning
I gave an invited talk at the Distributed and Private Machine Learning (DPML) workshop at ICLR 2021 on Inference Risks for Machine Learning.
The talk mostly covers work by Bargav Jayaraman on evaluating privacy in machine learning and connecting attribute inference and imputation, and recent work by Anshuman Suri on property inference.
Codaspy 2021 Keynote: When Models Learn Too Much
Here are the slides for my talk at the 11th ACM Conference on Data and Application Security and Privacy: When Models Learn Too Much [PDF]

The talk includes Bargav Jayaraman's work (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) on evaluating privacy in machine learning, as well as more recent work by Anshuman Suri on property inference attacks, and Bargav's work on attribute inference and imputation:
- Merlin, Morgan, and the Importance of Thresholds and Priors
- Evaluating Differentially Private Machine Learning in Practice

Microsoft Security Data Science Colloquium: Inference Privacy in Theory and Practice
Here are the slides for my talk at the Microsoft Security Data Science Colloquium:
When Models Learn Too Much: Inference Privacy in Theory and Practice [PDF]
The talk is mostly about Bargav Jayaraman’s work (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) on evaluating privacy:
- Merlin, Morgan, and the Importance of Thresholds and Priors
- Evaluating Differentially Private Machine Learning in Practice
Merlin, Morgan, and the Importance of Thresholds and Priors
Post by Katherine Knipmeyer

Machine learning poses a substantial risk that adversaries will be able to discover information that a model does not intend to reveal. One set of methods for learning this sensitive information, known broadly as membership inference attacks, predicts whether or not a query record belongs to the training set. A basic membership inference attack involves an attacker with a given record and black-box access to a model, who tries to determine whether that record was a member of the model's training set.
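To make that basic black-box setting concrete, here is a minimal confidence-threshold membership inference sketch on synthetic data. This is a simplified baseline for illustration only; the Merlin and Morgan attacks studied in this work differ, in particular in how thresholds and priors are chosen, and all names and numbers below are placeholders.

```python
# Sketch: simple confidence-threshold membership inference against a black-box model.
# A simplified baseline for illustration, not the Merlin/Morgan attacks from the post.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Half the records are training members, half are held out (non-members).
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

def confidence(model, X_q, y_q):
    # Confidence the model assigns to the true label of each query record.
    probs = model.predict_proba(X_q)
    return probs[np.arange(len(y_q)), y_q]

threshold = 0.9  # attacker-chosen threshold; choosing it well is the hard part
pred_member_mem = confidence(target, X_mem, y_mem) >= threshold
pred_member_non = confidence(target, X_non, y_non) >= threshold

# Balanced attack accuracy over members and non-members.
acc = 0.5 * (pred_member_mem.mean() + (1 - pred_member_non.mean()))
print("membership inference accuracy:", acc)
```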
Research Symposium Posters
Five students from our group presented posters at the department's Fall Research Symposium:
Anshuman Suri's Overview Talk