Center for Trustworthy Machine Learning

The National Science Foundation announced the Center for Trustworthy Machine Learning today, a new five-year SaTC Frontier Center “to develop a rigorous understanding of the security risks of the use of machine learning and to devise the tools, metrics and methods to manage and mitigate security vulnerabilities.”

The Center is led by Patrick McDaniel at Penn State University and, in addition to our group, includes Dan Boneh and Percy Liang (Stanford University), Kamalika Chaudhuri (University of California San Diego), Somesh Jha (University of Wisconsin), and Dawn Song (University of California Berkeley).
Artificial intelligence: the new ghost in the machine
Engineering and Technology Magazine (a publication of the British Institution of Engineering and Technology) has an article that highlights adversarial machine learning research: Artificial intelligence: the new ghost in the machine, 10 October 2018, by Chris Edwards.

Although researchers such as David Evans of the University of Virginia see a full explanation being a little way off in the future, the massive number of parameters encoded by DNNs and the avoidance of overtraining due to SGD may have an answer to why the networks can hallucinate images and, as a result, see things that are not there and ignore those that are.
…
He points to work by PhD student Mainuddin Jonas that shows how adversarial examples can push the output away from what we would see as the correct answer. “It could be just one layer [that makes the mistake]. But from our experience it seems more gradual. It seems many of the layers are being exploited, each one just a little bit. The biggest differences may not be apparent until the very last layer.”
…
Researchers such as Evans predict a lengthy arms race in attacks and countermeasures that may on the way reveal a lot more about the nature of machine learning and its relationship with reality.
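The per-layer observation quoted above can be made concrete with a small experiment. The sketch below is not from the article or from Jonas's work; it is a minimal illustration, assuming a PyTorch model, a dict of layers to probe, and a hypothetical helper name `layer_drift`, of how one might measure how far each layer's activations move when a clean input is replaced by its adversarial counterpart.

```python
import torch

def layer_drift(model, probe_layers, x_clean, x_adv):
    """Relative L2 drift of each probed layer's activations when a clean
    input is swapped for its adversarial counterpart.
    probe_layers: dict mapping a name to a module inside `model`."""
    acts = {"clean": {}, "adv": {}}

    def make_hook(name, key):
        def hook(module, inputs, output):
            acts[key][name] = output.detach()
        return hook

    model.eval()
    for key, x in (("clean", x_clean), ("adv", x_adv)):
        # Attach hooks, run one forward pass, then detach the hooks.
        handles = [m.register_forward_hook(make_hook(name, key))
                   for name, m in probe_layers.items()]
        with torch.no_grad():
            model(x)
        for h in handles:
            h.remove()

    # ||a_adv - a_clean|| / ||a_clean|| for each probed layer.
    return {name: (torch.norm(acts["adv"][name] - acts["clean"][name])
                   / torch.norm(acts["clean"][name])).item()
            for name in probe_layers}

# Hypothetical usage with a torchvision ResNet, probing a few depths:
# drift = layer_drift(model,
#                     {"layer1": model.layer1, "layer4": model.layer4},
#                     x_clean, x_adv)
```

If the quoted intuition holds, the drift values would grow gradually with depth rather than jumping at a single layer.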
Violations of Children’s Privacy Laws
The New York Times has an article, How Game Apps That Captivate Kids Have Been Collecting Their Data, about a lawsuit the state of New Mexico is bringing against app markets (including Google) that allow apps presented in the Play store as being for children to violate COPPA rules and mislead users about tracking children. The lawsuit stems from a study led by Serge Egelman’s group at UC Berkeley that analyzed COPPA violations in children’s apps. Serge was an undergraduate student here (back in the early 2000s) – one of the things he did as an undergraduate was successfully sue a spammer.
USENIX Security 2018
Three SRG posters were presented at USENIX Security Symposium 2018 in Baltimore, Maryland:
- Nathaniel Grevatt (GDPR-Compliant Data Processing: Improving Pseudonymization with Multi-Party Computation)
- Matthew Wallace and Parvesh Samayamanthula (Deceiving Privacy Policy Classifiers with Adversarial Examples)
- Guy Verrier (How is GDPR Affecting Privacy Policies?, joint with Haonan Chen and Yuan Tian)
There were also a surprising number of appearances by an unidentified unicorn:
Your poster may have made the cut for the #usesec18 Poster Reception, but has it received the approval of a tiny, adorable unicorn? @UVA #seenatusesec18 #girlswhocode #futurecomputerscientist #dreambig pic.twitter.com/bZOO6lYLXK
Mutually Assured Destruction and the Impending AI Apocalypse
I gave a keynote talk at the USENIX Workshop on Offensive Technologies (WOOT), Baltimore, Maryland, 13 August 2018.
The title and abstract are what I provided for the WOOT program, but unfortunately (or maybe fortunately for humanity!) I wasn’t able to actually figure out a talk to match them.
The history of security includes a long series of arms races, where a new technology emerges and is subsequently developed and exploited by both defenders and attackers. Over the past few years, “Artificial Intelligence” has re-emerged as a potentially transformative technology, and deep learning in particular has produced a barrage of amazing results. We are in the very early stages of understanding the potential of this technology in security, but more worryingly, seeing how it may be exploited by malicious individuals and powerful organizations. In this talk, I’ll look at what lessons might be learned from previous security arms races, consider how asymmetries in AI may be exploited by attackers and defenders, touch on some recent work in adversarial machine learning, and hopefully help progress-loving Luddites figure out how to survive in a world overrun by AI doppelgängers, GAN gangs, and gibbon-impersonating pandas.
Cybersecurity Summer Camp
I helped organize a summer camp for high school teachers focused on cybersecurity, led by Ahmed Ibrahim. Some of the materials from the camp on cryptography, including the Jefferson Wheel and visual cryptography, are here: Cipher School for Muggles.
Cybersecurity Goes to Summer Camp. UVA Today. 22 July 2018. [archive.org]

Earlier this week, 25 high school teachers – including 21 from Virginia – filled a glass-walled room in Rice Hall, sitting in high adjustable chairs at wheeled work tables, their laptops open, following a lecture with graphics about the dangers that lurk in cyberspace and trying to figure out how to pass the information on to a generation that seems to share the most intimate details of life online. “I think understanding privacy is important to that generation that uses Facebook and Snapchat,” said David Evans, a computer science professor who helped organize the camp. “We hope to give teachers some ideas and tools to get their students excited about learning about cryptography, privacy and cybersecurity, and how these things can impact them.”
Dependable and Secure Machine Learning
I co-organized, with Homa Alemzadeh and Karthik Pattabiraman, a workshop on trustworthy machine learning attached to DSN 2018, in Luxembourg: DSML: Dependable and Secure Machine Learning.
DLS Keynote: Is 'adversarial examples' an Adversarial Example?
I gave a keynote talk at the 1st Deep Learning and Security Workshop (co-located with the 39th IEEE Symposium on Security and Privacy), San Francisco, California, 24 May 2018.
Abstract
Over the past few years, there has been an explosion of research in security of machine learning and on adversarial examples in particular. Although this is in many ways a new and immature research area, the general problem of adversarial examples has been a core problem in information security for thousands of years. In this talk, I’ll look at some of the long-forgotten lessons from that quest and attempt to understand what, if anything, has changed now that we are in the era of deep learning classifiers. I will survey the prevailing definitions for “adversarial examples”, argue that those definitions are unlikely to be the right ones, and raise questions about whether those definitions are leading us astray.
Wahoos at Oakland
UVA Group Dinner at IEEE Security and Privacy 2018
Including our newest faculty member, Yonghwi Kwon, joining UVA in Fall 2018!
Yuan Tian, Fnu Suya, Mainuddin Jonas, Yonghwi Kwon, David Evans, Weihang Wang, Aihua Chen, Weilin Xu
## Poster Session
- Fnu Suya (with Yuan Tian and David Evans), Adversaries Don’t Care About Averages: Batch Attacks on Black-Box Classifiers [PDF]
- Mainuddin Jonas (with David Evans), Enhancing Adversarial Example Defenses Using Internal Layers [PDF]
Lessons from the Last 3000 Years of Adversarial Examples
I spoke on Lessons from the Last 3000 Years of Adversarial Examples at Huawei’s Strategy and Technology Workshop in Shenzhen, China, 15 May 2018.
We also got to tour Huawei’s new research and development campus, under construction about 40 minutes from Shenzhen. It comes pretty close to being Disneyland, with its own railroad and villages themed after different European cities (Paris, Bologna, etc.).
Huawei’s New Research and Development Campus [More Pictures]