Cray Distinguished Speaker: On Leaky Models and Unintended Inferences
Here are the slides from my Cray Distinguished Speaker talk, On Leaky Models and Unintended Inferences: [PDF]

The ChatGPT limerick version of my talk abstract is much better than mine:

A machine learning model, oh so grand
With data sets that it held in its hand
It performed quite well
But secrets to tell
And an adversary’s tricks it could not withstand.

Thanks to Stephen McCamant and Kangjie Lu for hosting my visit, and to everyone at the University of Minnesota.
Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models

Post by Hannah Chen. Our work on balanced adversarial training looks at how to train models that are robust to two different types of adversarial examples:

Hannah Chen, Yangfeng Ji, David Evans. Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models. In The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), Abu Dhabi, 7-11 December 2022. [ArXiv]

Adversarial Examples

At the broadest level, an adversarial example is an input crafted intentionally to confuse a model.
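As a deliberately toy illustration of the two example types the paper balances (this is not the paper’s method), consider the sketch below. `model_predict` is a hypothetical stand-in for a trained sentiment classifier that has latched onto a spurious feature: a fickle adversarial example is a small, meaning-preserving edit that flips the prediction, while an obstinate one is a meaning-changing edit for which the prediction refuses to budge.

```python
# Toy illustration (not the paper's method) of the two adversarial example
# types that balanced adversarial training trades off. `model_predict` is a
# hypothetical stand-in for a trained sentiment classifier that has latched
# onto a spurious feature: the word "movie".

def model_predict(sentence: str) -> str:
    return "positive" if "movie" in sentence else "negative"

original = "The movie was great"    # model: positive (correct)

# Fickle adversarial example: a meaning-preserving synonym swap
# ("movie" -> "film") that nonetheless flips the model's prediction.
fickle = "The film was great"       # model: negative (flips, although meaning is unchanged)

# Obstinate adversarial example: a meaning-changing antonym swap
# ("great" -> "awful") for which the prediction stubbornly stays the same.
obstinate = "The movie was awful"   # model: positive (unchanged, although meaning flipped)

print(model_predict(original), model_predict(fickle), model_predict(obstinate))
```

Balanced adversarial training aims to make the model robust to both kinds of perturbation at once, rather than trading one failure mode for the other.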
BIML: What Machine Learnt Models Reveal

I gave a talk in the Berryville Institute of Machine Learning’s In the Barn series on What Machine Learnt Models Reveal, which is now available as an edited video:

David Evans, a professor of computer science researching security and privacy at the University of Virginia, talks about data leakage risk in ML systems and different approaches used to attack and secure models and datasets. Juxtaposing adversarial risks that target records and those aimed at attributes, David shows that differential privacy cannot capture all inference risks, and calls for more research based on privacy experiments aimed at both datasets and distributions.
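For reference, the record-level guarantee differential privacy gives can be stated as below; this is just the standard (ε, δ)-differential privacy definition, not anything specific to the talk.

```latex
% Standard (epsilon, delta)-differential privacy for a randomized training
% mechanism M: for all datasets D, D' differing in a single record and all
% sets S of possible outputs,
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S] + \delta
```

Because the bound quantifies over datasets that differ in a single record, it limits what an adversary can learn about any individual record’s presence, but it does not by itself capture distribution-level risks such as property inference, which is why the talk calls for privacy experiments aimed at both datasets and distributions.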
Microsoft Research Summit: Surprising (and Unsurprising) Inference Risks in Machine Learning

Here are the slides for my talk at the Practical and Theoretical Privacy of Machine Learning Training Pipelines Workshop at the Microsoft Research Summit (21 October 2021): Surprising (and Unsurprising) Inference Risks in Machine Learning [PDF]

The work by Bargav Jayaraman (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) that I talked about on improving membership inference attacks is described in more detail here:

Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David Evans.
UVA News Article

UVA News has an article by Audra Book on our research on security and privacy of machine learning (with some very nice quotes from several students in the group, and me saying something positive about the NSA!): Computer science professor David Evans and his team conduct experiments to understand security and privacy risks associated with machine learning, 8 September 2021.

David Evans, professor of computer science in the University of Virginia School of Engineering and Applied Science, is leading research to understand how machine learning models can be compromised.
ICLR DPML 2021: Inference Risks for Machine Learning

I gave an invited talk at the Distributed and Private Machine Learning (DPML) workshop at ICLR 2021 on Inference Risks for Machine Learning.
The talk mostly covers work by Bargav Jayaraman on evaluating privacy in machine learning and connecting attribute inference and imputation, and recent work by Anshuman Suri on property inference.
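To give a rough sense of how attribute inference and imputation are compared, here is a hedged sketch under assumptions (not the experimental setup from the work above): an imputation baseline predicts the sensitive attribute from the data alone, while the model-based attack also queries the trained model, and the model only meaningfully leaks the attribute if the attack beats the baseline. The synthetic data and every name below are made up for illustration.

```python
# Hedged sketch contrasting imputation (predicting a sensitive attribute from
# public data alone) with model-based attribute inference (also querying the
# trained model). Not the experimental setup from the referenced work; the
# synthetic data and all names below are made up for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic records: two known attributes, one correlated sensitive attribute,
# and a label that the target model is trained to predict.
n = 1000
known = rng.normal(size=(n, 2))
sensitive = (known @ np.array([1.0, -0.5]) + rng.normal(scale=0.5, size=n) > 0).astype(int)
labels = (known[:, 0] + sensitive + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

features = np.column_stack([known, sensitive])
target_model = LogisticRegression(max_iter=1000).fit(features, labels)

# Imputation baseline: predict the sensitive attribute from the known
# attributes alone, with no access to the target model.
imputer = LogisticRegression(max_iter=1000).fit(known, sensitive)
imputation_acc = imputer.score(known, sensitive)

# Attribute inference: for each record, try both values of the sensitive
# attribute and keep whichever makes the target model assign higher
# probability to the record's observed label.
def infer_sensitive(model, known_row, label):
    scores = [model.predict_proba(np.append(known_row, guess).reshape(1, -1))[0, label]
              for guess in (0, 1)]
    return int(np.argmax(scores))

inferred = np.array([infer_sensitive(target_model, known[i], labels[i]) for i in range(n)])
attack_acc = (inferred == sensitive).mean()

# The model only meaningfully leaks the attribute if the attack beats imputation.
print(f"imputation baseline: {imputation_acc:.2f}  model-based attack: {attack_acc:.2f}")
```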
Codaspy 2021 Keynote: When Models Learn Too Much
Here are the slides for my talk at the 11th ACM Conference on Data and Application Security and Privacy: When Models Learn Too Much [PDF]

The talk includes Bargav Jayaraman’s work (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) on evaluating privacy in machine learning, as well as more recent work by Anshuman Suri on property inference attacks, and Bargav’s work on attribute inference and imputation:

Merlin, Morgan, and the Importance of Thresholds and Priors
Evaluating Differentially Private Machine Learning in Practice

“When models learn too much.
CrySP Talk: When Models Learn Too Much

I gave a talk on When Models Learn Too Much at the University of Waterloo (virtually) in the CrySP Speaker Series on Privacy (29 March 2021):

Abstract

Statistical machine learning uses training data to produce models that capture patterns in that data. When models are trained on private data, such as medical records or personal emails, there is a risk that those models will not only learn the hoped-for patterns, but also learn and expose sensitive information about their training data.
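One common way such exposure is measured is with a membership inference test. Below is a minimal, hedged sketch of a loss-threshold membership test (illustrative only, with made-up loss values; not the specific attacks evaluated in the work referenced above): training examples tend to incur lower loss than unseen examples, so comparing per-example loss to a threshold yields a guess about training set membership.

```python
# Minimal sketch of a loss-threshold membership inference test. Illustrative
# only, with hypothetical per-example loss values; not the specific attacks
# evaluated in the referenced work.

import numpy as np

def membership_guess(per_example_loss: np.ndarray, threshold: float) -> np.ndarray:
    """Guess member (1) when the model's loss on an example is below the threshold.

    Training examples tend to have lower loss than unseen examples, so a
    well-chosen threshold separates members from non-members better than
    random guessing; how to pick that threshold (and the role of priors)
    is central to threshold-based membership inference attacks.
    """
    return (per_example_loss < threshold).astype(int)

# Hypothetical per-example losses from some trained model:
member_losses = np.array([0.05, 0.10, 0.02, 0.20])      # losses on training records
nonmember_losses = np.array([0.90, 0.45, 1.20, 0.60])   # losses on held-out records
losses = np.concatenate([member_losses, nonmember_losses])
truth = np.array([1] * 4 + [0] * 4)

guesses = membership_guess(losses, threshold=0.3)
print("attack accuracy:", (guesses == truth).mean())
```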
Virginia Consumer Data Protection Act

Josephine Lamp presented on the new data privacy law that is pending in Virginia (it still needs a few steps, including the expected signing by the governor, but it is likely to go into effect on January 1, 2023): Slides (PDF)
This article provides a summary of the law: Virginia Passes Consumer Privacy Law; Other States May Follow, National Law Review, 17 February 2021.
The law itself is here: SB 1392: Consumer Data Protection Act
Microsoft Security Data Science Colloquium: Inference Privacy in Theory and Practice
Here are the slides for my talk at the Microsoft Security Data Science Colloquium:
When Models Learn Too Much: Inference Privacy in Theory and Practice [PDF]
The talk is mostly about Bargav Jayaraman’s work (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) on evaluating privacy: