I had a chance to visit the White House for a Roundtable on Accelerating Responsible Sharing of Federal Data. The meeting was held under the Chatham House Rule, so I won’t mention the other participants here.
It took place in the Roosevelt Room of the White House. We entered through the visitors’ side entrance. After passing through a security gate (where you put your phone in a lockbox, so no pictures inside) with a TV blaring Fox News, you reach a pleasant lobby for waiting, and then an entrance directly into the Roosevelt Room.
Five students from our group presented posters at the department’s
Anshuman Suri's Overview Talk
I taught a mini-course on Trustworthy Machine Learning at the 19th International School on Foundations of Security Analysis and Design in Bertinoro, Italy.
Slides from my three (two-hour) lectures are posted below, along with some links to relevant papers and resources.
Class 1: Introduction/Attacks

The PDF malware evasion attack is described in this paper:
Weilin Xu, Yanjun Qi, and David Evans. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers. In Network and Distributed System Security Symposium (NDSS), 2016.
(Cross-post by Bargav Jayaraman)
With recent advances in the composition of differentially private mechanisms, the research community has been able to achieve meaningful deep learning with privacy budgets in the single digits. Rényi differential privacy (RDP) is one mechanism that provides tighter composition; it is widely used because of its implementation in TensorFlow Privacy. (More recently, Gaussian differential privacy (GDP) has shown a tighter analysis for low privacy budgets, but it was not yet available when we did this work.)
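To make the RDP accounting concrete, here is a minimal sketch of how a privacy budget for DP-SGD can be computed with TensorFlow Privacy’s RDP accountant (using the module as it existed around the time of this work); the training parameters below are hypothetical, chosen only to illustrate the calculation:

```python
# Minimal sketch (not from the paper): RDP accounting for DP-SGD using
# TensorFlow Privacy's rdp_accountant module (circa-2019 API).
from tensorflow_privacy.privacy.analysis.rdp_accountant import (
    compute_rdp,
    get_privacy_spent,
)

# Hypothetical training configuration, for illustration only.
n = 60000               # number of training examples
batch_size = 256
noise_multiplier = 1.1  # Gaussian noise std. dev. relative to clipping norm
epochs = 60
delta = 1e-5

q = batch_size / n                    # per-step sampling probability
steps = epochs * (n // batch_size)    # total noisy gradient steps
orders = [1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))

# RDP composes additively across steps at each order; the accumulated
# RDP guarantee is then converted to an (epsilon, delta) guarantee.
rdp = compute_rdp(q=q, noise_multiplier=noise_multiplier,
                  steps=steps, orders=orders)
eps, _, opt_order = get_privacy_spent(orders, rdp, target_delta=delta)
print(f"epsilon = {eps:.2f} at delta = {delta} (optimal order: {opt_order})")
```

The additive composition over RDP orders, followed by a single conversion to (ε, δ) at the end, is what yields tighter bounds than composing (ε, δ) guarantees directly.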
Bargav Jayaraman presented our paper on Evaluating Differentially Private Machine Learning in Practice at the 28th USENIX Security Symposium in Santa Clara, California.
Summary by Lea Kissner:
Hey it's the results! pic.twitter.com/ru1FbkESho
— Lea Kissner (@LeaKissner) August 17, 2019

Also, great to see several UVA folks at the conference, including:
Sam Havron (BSCS 2017, now a PhD student at Cornell) presented a paper on the work he and his colleagues have done on computer security for victims of intimate partner violence.
Brink News (a publication of The Atlantic) published my essay on the risks of deploying AI systems.
Artificial intelligence technologies have the potential to transform society in positive and powerful ways. Recent studies have shown computing systems that can outperform humans at numerous once-challenging tasks, ranging from making medical diagnoses and reviewing legal contracts to playing Go and recognizing human emotions.
Despite these successes, AI systems are fundamentally fragile — and the ways they can fail are poorly understood.
I’m back from a very interesting Workshop on Federated Learning and Analytics that was organized by Peter Kairouz and Brendan McMahan from Google’s federated learning team and was held at Google Seattle.
For the first part of my talk, I covered Bargav’s work on evaluating differentially private machine learning, but reserved the last few minutes to address the cognitive dissonance I felt being at a Google meeting on privacy.
The New York Times has an article, How Game Apps That Captivate Kids Have Been Collecting Their Data, about a lawsuit the state of New Mexico is bringing against app markets (including Google) for allowing apps that are presented in the Play store as being for children to violate COPPA rules and mislead users about tracking children. The lawsuit stems from a study led by Serge Egelman’s group at UC Berkeley that analyzed COPPA violations in children’s apps.
Three SRG posters were presented at USENIX Security Symposium 2018 in Baltimore, Maryland:
There were also a surprising number of appearances by an unidentified unicorn: