Jobs for Humans, 2029-2059
I was honored to participate in a panel at an event on Adult Education in the Age of Artificial Intelligence, run by The Great Courses as a fundraiser for the Academy of Hope, an adult public charter school in Washington, D.C.
I spoke first, after a few introductory talks, and was followed by Nicole Smith and Ellen Scully-Russ, and then a keynote from Dexter Manley, a Super Bowl winner with the Washington Redskins. After a short break, Kavitha Cardoza moderated a very interesting panel discussion. A recording of the talk and the rest of the event is supposed to be available to Great Courses Plus subscribers.
FOSAD Trustworthy Machine Learning Mini-Course
I taught a mini-course on Trustworthy Machine Learning at the 19th International School on Foundations of Security Analysis and Design in Bertinoro, Italy.
Slides from my three (two-hour) lectures are posted below, along with some links to relevant papers and resources.
Class 1: Introduction/Attacks
The PDF malware evasion attack is described in this paper:
Weilin Xu, Yanjun Qi, and David Evans. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers. Network and Distributed System Security Symposium (NDSS). San Diego, CA. 21-24 February 2016. [PDF] [EvadeML.org]
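At a high level, the attack searches over mutated variants of a malicious seed PDF for one that the classifier scores as benign but that an oracle confirms still behaves maliciously. The sketch below is only meant to convey that search loop, not the genetic-programming implementation from the paper; `mutate`, `classifier_score`, `still_malicious`, and the 0.5 score cutoff are hypothetical placeholders.

```python
# A simplified sketch of a mutation-based evasion search in the spirit of the EvadeML
# work; not the paper's implementation. `mutate`, `classifier_score` (maliciousness
# score in [0, 1]), and `still_malicious` (an oracle check) are hypothetical placeholders.
import random

def search_for_evasive_variant(seed, mutate, classifier_score, still_malicious,
                               population_size=20, generations=100):
    """Evolve variants of `seed` until one looks benign to the classifier but stays malicious."""
    population = [seed] * population_size
    for _ in range(generations):
        # Mutate every candidate, keeping only variants the oracle confirms are still malicious.
        candidates = [mutate(x) for x in population]
        candidates = [x for x in candidates if still_malicious(x)] or population
        # Lower score means more benign-looking to the classifier.
        candidates.sort(key=classifier_score)
        if classifier_score(candidates[0]) < 0.5:  # assumed benign/malicious cutoff
            return candidates[0]
        # Seed the next generation from the most benign-looking survivors.
        survivors = candidates[:max(1, population_size // 4)]
        population = [random.choice(survivors) for _ in range(population_size)]
    return None
```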
Class 2: Defenses
This paper describes the feature squeezing framework:
Weilin Xu, David Evans, Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. NDSS 2018. [PDF] [EvadeML.org]
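The core detection idea can be sketched in a few lines: compare the model's predictions on an input with its predictions on "squeezed" (bit-depth-reduced or smoothed) versions of that input, and flag the input as adversarial when the predictions disagree by more than a threshold. The sketch below is just an illustration of that idea, not the code released with the paper; the `predict` function, the grayscale-image assumption, and the threshold value are placeholders.

```python
# A minimal sketch of feature-squeezing detection; not the authors' released code.
# Assumes `predict(x)` is a hypothetical function returning a softmax probability vector
# for a grayscale image `x` with pixel values in [0, 1]; the threshold is illustrative.
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=4):
    """Squeeze by quantizing pixel values down to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, size=2):
    """Squeeze by local median smoothing of the image."""
    return median_filter(x, size=size)

def looks_adversarial(x, predict, threshold=1.0):
    """Flag x when predictions on squeezed inputs diverge too far from the original prediction."""
    p_orig = predict(x)
    distances = [np.abs(p_orig - predict(squeeze(x))).sum()
                 for squeeze in (reduce_bit_depth, median_smooth)]
    return max(distances) > threshold
```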
Google Security and Privacy Workshop
I presented a short talk at a workshop at Google on Adversarial ML: Closing Gaps between Theory and Practice (mostly fun for the movie of me trying to solve Google’s CAPTCHA on the last slide).
Getting the actual screencast to fit into the limited time for this talk challenged the limits of my video editing skills.
I can say with some confidence that Google does donuts much better than they do cookies!
Google Federated Privacy 2019: The Dragon in the Room
I’m back from a very interesting Workshop on Federated Learning and Analytics that was organized by Peter Kairouz and Brendan McMahan from Google’s federated learning team and was held at Google Seattle.
For the first part of my talk, I covered Bargav’s work on evaluating differentially private machine learning, but I reserved the last few minutes of my talk to address the cognitive dissonance I felt being at a Google meeting on privacy.
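For readers unfamiliar with the area, the kind of mechanism these evaluations target can be illustrated with a DP-SGD-style update: clip each per-example gradient and add calibrated Gaussian noise before applying it. The sketch below is a generic illustration, not the evaluation code from the talk, and its parameter values are arbitrary.

```python
# A generic illustration of a DP-SGD-style private gradient update: clip each
# per-example gradient, average, and add Gaussian noise. Not the code from the talk;
# the clip_norm and noise_multiplier values here are arbitrary.
import numpy as np

def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Return a noisy, clipped average gradient over one batch."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise
```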
I don’t want to offend anyone, and I want to preface this by saying I have lots of friends and former students who work for Google, people I greatly admire and respect. So I want to raise the cognitive dissonance I have being at a “privacy” meeting run by Google, in the hope that people at Google actually do think about privacy and will be able to convince me how wrong I am.
JASON Spring Meeting: Adversarial Machine Learning
I had the privilege of speaking at the JASON Spring Meeting, undoubtedly one of the most diverse meetings I’ve been part of, with talks on hypersonic signatures (from my DSSG 2008-2009 colleague, Ian Boyd), FBI DNA, nuclear proliferation in Iran, engineering biological materials, and the 2020 census (including a very interesting presentation from John Abowd on the differential privacy mechanisms they have developed and evaluated). (Unfortunately, my lack of security clearance kept me out of the SCIF used for the talks on quantum computing and more sensitive topics.)
Can Machine Learning Ever Be Trustworthy?
I gave the Booz Allen Hamilton Distinguished Colloquium at the University of Maryland on Can Machine Learning Ever Be Trustworthy?
Mutually Assured Destruction and the Impending AI Apocalypse
I gave a keynote talk at the USENIX Workshop on Offensive Technologies (WOOT), Baltimore, Maryland, 13 August 2018.
The title and abstract are what I provided for the WOOT program, but unfortunately (or maybe fortunately for humanity!) I wasn’t able to actually figure out a talk to match them.
The history of security includes a long series of arms races, where a new technology emerges and is subsequently developed and exploited by both defenders and attackers. Over the past few years, “Artificial Intelligence” has re-emerged as a potentially transformative technology, and deep learning in particular has produced a barrage of amazing results. We are in the very early stages of understanding the potential of this technology in security, but more worryingly, seeing how it may be exploited by malicious individuals and powerful organizations. In this talk, I’ll look at what lessons might be learned from previous security arms races, consider how asymmetries in AI may be exploited by attackers and defenders, touch on some recent work in adversarial machine learning, and hopefully help progress-loving Luddites figure out how to survive in a world overrun by AI doppelgängers, GAN gangs, and gibbon-impersonating pandas.
DLS Keynote: Is 'adversarial examples' an Adversarial Example?
I gave a keynote talk at the 1st Deep Learning and Security Workshop (co-located with the 39th IEEE Symposium on Security and Privacy). San Francisco, California. 24 May 2018.
Abstract
Over the past few years, there has been an explosion of research in security of machine learning and on adversarial examples in particular. Although this is in many ways a new and immature research area, the general problem of adversarial examples has been a core problem in information security for thousands of years. In this talk, I’ll look at some of the long-forgotten lessons from that quest and attempt to understand what, if anything, has changed now that we are in the era of deep learning classifiers. I will survey the prevailing definitions for “adversarial examples”, argue that those definitions are unlikely to be the right ones, and raise questions about whether those definitions are leading us astray.
Lessons from the Last 3000 Years of Adversarial Examples
I spoke on Lessons from the Last 3000 Years of Adversarial Examples at Huawei’s Strategy and Technology Workshop in Shenzhen, China, 15 May 2018.
We also got to tour Huawei’s new research and development campus, under construction about 40 minutes from Shenzhen. It is pretty close to Disneyland, with its own railroad and villages themed after different European cities (Paris, Bologna, etc.).
Huawei’s New Research and Development Campus [More Pictures]
Feature Squeezing at NDSS
Weilin Xu presented Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks at the Network and Distributed System Security Symposium 2018. San Diego, CA. 21 February 2018.
Paper: Weilin Xu, David Evans, Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. NDSS 2018. [PDF]
Project Site: EvadeML.org