Our research seeks to empower individuals and organizations to control
how their data is used. We use techniques from cryptography,
programming languages, machine learning, operating systems, and other
areas to both understand and improve the privacy and security of
computing as practiced today, and as envisioned in the future. A major
current focus is on adversarial machine learning.
Everyone is welcome at our research group meetings. To get
announcements, join our Slack Group (anyone with a
@virginia.edu email address can join directly, or email me
to request an invitation).
A Chinese translation of our book A Pragmatic Introduction to Secure
Multi-Party Computation (by David
Evans, Vladimir Kolesnikov, and Mike Rosulek) is now available!
Thanks to Weiran Liu and Sengchao Ding for all the work they
did on the translation.
To order from JD.com: https://item.jd.com/13302742.html
(The English version of the book is still available for free download from https://securecomputation.org.)
I gave an invited talk at the Distributed and Private Machine Learning (DPML) workshop at ICLR 2021 on Inference Risks for Machine Learning.
The talk mostly covers work by Bargav Jayaraman on evaluating privacy in
machine learning and connecting attribute inference and imputation, and recent work by Anshuman Suri on property inference.
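To give a flavor of this kind of attack, here is a minimal sketch of imputation-style attribute inference, assuming a model that returns class probabilities (the function names and interface are illustrative, not code from the papers): the adversary plugs in each candidate value for an unknown sensitive feature and guesses the one the model fits best.

```python
# Toy attribute inference sketch (illustrative; not code from the papers).
import numpy as np

def infer_attribute(model, x_partial, sensitive_idx, candidates, true_label):
    # Try each candidate value for the unknown sensitive feature and
    # keep the one giving the lowest loss on the known true label.
    best_value, best_loss = None, float("inf")
    for v in candidates:
        x = np.array(x_partial, dtype=float)
        x[sensitive_idx] = v                       # plug in a candidate value
        probs = model(x)                           # assumed: returns class probabilities
        loss = -np.log(probs[true_label] + 1e-12)  # cross-entropy under this guess
        if loss < best_loss:
            best_value, best_loss = v, loss
    return best_value
```

The same loss signal underlies membership inference: unusually low loss on a record suggests it was part of the training data.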
Here are the slides for my talk at the
11th ACM Conference on Data and Application Security and Privacy:
When Models Learn Too Much [PDF]
The talk includes Bargav Jayaraman’s work (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) on evaluating privacy in machine learning, as well as more recent work by Anshuman Suri on property inference attacks, and Bargav’s work on attribute inference and imputation.
I gave a talk on When Models Learn Too Much at the University of Waterloo (virtually) in the CrySP
Speaker Series on Privacy (29 March 2021):
Statistical machine learning uses training data to produce models that
capture patterns in that data. When models are trained on private
data, such as medical records or personal emails, there is a risk that
those models will not only learn the hoped-for patterns, but also
expose sensitive information about their training
data. Several different types of inference attacks on machine learning
models have been found, and methods have been proposed to mitigate the
risks of exposing sensitive aspects of training data. Differential
privacy provides formal guarantees bounding certain types of inference
risk, but, at least with state-of-the-art methods, providing
substantive differential privacy guarantees requires adding so much
noise to the training process for complex models that the resulting
models are useless. Experimental evidence, however, suggests that
inference attacks have limited power, and in many cases a very small
amount of privacy noise seems to be enough to defuse
them. In this talk, I will give an overview of a variety of
different inference risks for machine learning models, talk about
strategies for evaluating model inference risks, and report on some
experiments by our research group to better understand the power of
inference attacks in more realistic settings, and explore some broader
connections between privacy, fairness, and adversarial robustness.
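As a rough illustration of where that noise comes from, here is a minimal sketch of one differentially private gradient step in the style of DP-SGD (Abadi et al.); the names and parameter values are illustrative, not from the talk. Each example's gradient is clipped to bound its influence, and Gaussian noise calibrated to that bound is added, so stronger privacy (a larger noise multiplier) directly degrades the gradient signal.

```python
# Sketch of one DP-SGD-style update (illustrative, not the talk's code).
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1):
    # Clip each per-example gradient to L2 norm at most clip_norm,
    # bounding any single example's influence on the update.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # Average the clipped gradients and add Gaussian noise whose scale
    # is calibrated to the clipping bound (the sensitivity).
    avg = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                             size=avg.shape)
    return params - lr * (avg + noise)
```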