David Evans, professor of computer science in the University of Virginia School of Engineering and Applied Science, is leading research to understand how machine learning models can be compromised.
ICLR DPML 2021: Inference Risks for Machine Learning
I gave an invited talk at the Distributed and Private Machine Learning (DPML) workshop at ICLR 2021 on Inference Risks for Machine Learning.
The talk mostly covers work by Bargav Jayaraman on evaluating privacy in machine learning and connecting attribute inference and imputation, and recent work by Anshuman Suri on property inference.
CODASPY 2021 Keynote: When Models Learn Too Much
Here are the slides for my talk at the 11th ACM Conference on Data and Application Security and Privacy (CODASPY 2021).
The talk includes Bargav Jayaraman’s work (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) on evaluating privacy in machine learning, as well as more recent work by Anshuman Suri on property inference attacks and by Bargav on attribute inference and imputation:
- Merlin, Morgan, and the Importance of Thresholds and Priors
- Evaluating Differentially Private Machine Learning in Practice
“When models learn too much.” Dr. David Evans @UdacityDave of the University of Virginia gave a keynote talk on different inference risks for machine learning models this morning at #codaspy21
Merlin, Morgan, and the Importance of Thresholds and Priors
Post by Katherine Knipmeyer
Machine learning poses a substantial risk that adversaries will be able to discover information that the model does not intend to reveal. One broad class of such attacks, known as membership inference attacks, predicts whether or not a given record was part of the model’s training set. In the basic setting, an adversary with black-box access to a model and a candidate record tries to determine whether that record was a member of the model’s training set.
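To make the black-box setting concrete, here is a minimal sketch of a simple confidence-thresholding membership inference test in Python. It assumes a scikit-learn-style classifier exposing `predict_proba`; the function name, arguments, and fixed threshold are illustrative placeholders, not the Merlin or Morgan attacks discussed in the post.

```python
import numpy as np

def confidence_threshold_attack(model, records, labels, threshold=0.9):
    """Guess membership for each record using only black-box queries.

    Intuition: models tend to be more confident on records they were
    trained on, so high confidence in the true label is taken as
    evidence of membership. The fixed `threshold` is a placeholder;
    realistic attacks calibrate it (e.g., per class or with shadow
    models) rather than picking a single value by hand.
    """
    labels = np.asarray(labels)
    probs = model.predict_proba(records)                  # black-box access only
    true_label_conf = probs[np.arange(len(labels)), labels]
    return true_label_conf >= threshold                   # True = predicted member
```

An attacker would run this over candidate records and report those flagged as members; how well the guess does depends heavily on how the threshold is chosen and on the prior probability that a candidate is a member, which is the kind of issue the thresholds-and-priors discussion concerns.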