CrySP Talk: When Models Learn Too Much

I gave a talk on When Models Learn Too Much at the University of Waterloo (virtually) in the CrySP Speaker Series on Privacy (29 March 2021): Abstract: Statistical machine learning uses training data to produce models that capture patterns in that data. When models are trained on private data, such as medical records or personal emails, there is a risk that those models will not only learn the hoped-for patterns, but also learn and expose sensitive information about their training data.

Read More…

Improved Estimation of Concentration (ICLR 2021)

Our paper on Improved Estimation of Concentration Under ℓp-Norm Distance Metrics Using Half Spaces (Jack Prescott, Xiao Zhang, and David Evans) will be presented at ICLR 2021. Abstract: Concentration of measure has been argued to be the fundamental cause of adversarial vulnerability. Mahloujifar et al. (2019) presented an empirical way to measure the concentration of a data distribution using samples, and employed it to find lower bounds on intrinsic robustness for several benchmark datasets.
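The key idea behind measuring concentration empirically can be sketched in a few lines. Under the ℓ2 metric, the ε-expansion of a half space {x : ⟨w, x⟩ ≤ t} (with unit normal w) is just the shifted half space {x : ⟨w, x⟩ ≤ t + ε}, so the expansion measure of a half space of mass α can be estimated by counting samples past a shifted threshold. This is only an illustrative sketch, not the paper's method: the projection direction here is a fixed coordinate axis, whereas an actual estimator would search over directions.

```python
import numpy as np

def estimate_halfspace_expansion(samples, alpha, eps):
    """Estimate the eps-expansion measure of a half space with mass alpha.

    Illustrative assumption: project onto the first coordinate axis only;
    a real estimator would optimize over the choice of direction.
    """
    w = np.zeros(samples.shape[1])
    w[0] = 1.0
    proj = samples @ w
    # Threshold so the half space {x : <w, x> <= t} has empirical mass alpha.
    t = np.quantile(proj, alpha)
    # Under l2, its eps-expansion is the same half space shifted by eps.
    return float(np.mean(proj <= t + eps))
```

For a standard Gaussian, a half space of mass 0.5 has 0.5-expansion measure Φ(0.5) ≈ 0.69, which the empirical estimate recovers from samples.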

Read More…

Virginia Consumer Data Protection Act

Josephine Lamp presented on the new data privacy law that is pending in Virginia (it still requires a few steps, including the expected signing by the governor, but is likely to go into effect January 1, 2023): Slides (PDF)

This article provides a summary of the law: Virginia Passes Consumer Privacy Law; Other States May Follow, National Law Review, 17 February 2021.

The law itself is here: SB 1392: Consumer Data Protection Act

Algorithmic Accountability and the Law

Brink News (a publication of The Atlantic) published an essay I co-authored with Tom Nachbar (UVA Law School) on how the law views algorithmic accountability and the limits of what measures the law permits for adjusting algorithms to counter inequity: Algorithms Are Running Afoul of Anti-Discrimination Law, Tom Nachbar and David Evans, Brink, 7 December 2020. Computing systems that are found to discriminate on prohibited bases, such as race or sex, are no longer surprising.

Read More…

Microsoft Security Data Science Colloquium: Inference Privacy in Theory and Practice

Here are the slides for my talk at the Microsoft Security Data Science Colloquium:
When Models Learn Too Much: Inference Privacy in Theory and Practice [PDF]

The talk is mostly about Bargav Jayaraman’s work (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) on evaluating privacy.

Fact-checking Donald Trump’s tweet firing Christopher Krebs

I was a source for this “Pants on Fire!” fact check by PolitiFact on Donald Trump’s tweet that fired Christopher Krebs claiming that “The recent statement by Chris Krebs on the security of the 2020 Election was highly inaccurate, in that there were massive improprieties and fraud - including dead people voting, Poll Watchers not allowed into polling locations, “glitches” in the voting machines which changed…” PolitiFact: Fact-checking Donald Trump’s tweet firing Christopher Krebs, 18 November 2020

Read More…

Voting Security

I was interviewed for a local news story by Daniel Grimes on election security: UVA cybersecurity expert: Virginia is one of the safer states to cast a ballot, NBC 29 News, 21 October 2020.

Merlin, Morgan, and the Importance of Thresholds and Priors

Post by Katherine Knipmeyer

Machine learning poses a substantial risk that adversaries will be able to discover information that the model does not intend to reveal. One set of methods by which adversaries can learn this sensitive information, known broadly as membership inference attacks, predicts whether or not a query record belongs to the training set. A basic membership inference attack involves an attacker with a given record and black-box access to a model who tries to determine whether that record was a member of the model’s training set.
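The basic attack described above can be sketched as a simple confidence-thresholding rule: since models tend to be more confident on records they were trained on, predict “member” when the model’s confidence on a queried record exceeds a threshold. This is a minimal sketch, not the attacks the post evaluates; the function names and the calibration-from-non-members step are illustrative assumptions, and choosing the threshold well (the subject of the post’s title) is exactly the hard part.

```python
import numpy as np

def choose_threshold(nonmember_confidences, max_false_positive_rate):
    # Calibrate the decision threshold on records known to be outside the
    # training set, so that the attack's false-positive rate on non-members
    # stays within the given budget.
    return float(np.quantile(nonmember_confidences, 1.0 - max_false_positive_rate))

def is_member(confidence, threshold):
    # Predict "member" when the model's confidence on the queried record
    # exceeds the calibrated threshold.
    return confidence >= threshold
```

For example, calibrating on non-member confidences with a 5% false-positive budget yields a threshold near the 95th percentile of those confidences; records scoring above it are flagged as likely training-set members.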

Read More…

Adversarially Robust Representations

Post by Sicheng Zhu

With the rapid development of deep learning and the explosive growth of unlabeled data, representation learning is becoming increasingly important. It has enabled impressive applications such as pre-trained language models (e.g., BERT and GPT-3). Popular as it is, representation learning raises concerns about the robustness of learned representations under adversarial settings. For example, how can we compare the robustness of different representations, and how can we build representations that enable robust downstream classifiers?

Read More…

Intrinsic Robustness using Conditional GANs

The video of Xiao’s presentation for AISTATS 2020 is now available: Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models. Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples based on different assumptions about the underlying input probability space. It remains unclear, however, whether these results apply to natural image distributions. In this work, we assume the underlying data distribution is captured by some conditional generative model, and prove intrinsic robustness bounds for a general class of classifiers, which solves an open problem in Fawzi et al.

Read More…

All Posts by Category or Tags.